# History of the Inventory Control Commerce Essay



Anticipation of demand is an essential activity of every business that requires inventory control. Determining the optimal forecasting policy plays a crucial role in Make-to-Stock (MTS)\nomenclature[]{MTS}{Make-to-Stock} production systems, since it triggers production decisions and influences stock levels. In contrast to the stockless production of Make-to-Order (MTO)\nomenclature[]{MTO}{Make-to-Order} systems, the accuracy of forecasting in MTS operations therefore influences a company's cost management through inventory costs and, if any, unfulfilled demand. The range of forecasting methods is large, comprising methods based on qualitative assessment as well as techniques that rely on quantitative sources {\sc Jenkins (1976)}. Although academics have developed stable and understandable procedures that are likely to improve the performance of supply chains, many businesses do not apply quantitative forecasting methods, as shown in forecasting benchmark studies such as Robert D. {\sc Klassen} and Benito E. {\sc Flores (2001)}. Rather than applying computational methods, the inventory behavior of firms is characterized by less theory-based approaches, such as judgmental procedures or average-sized orders. The reasons for this behavior are not always easy to identify, but among them are a lack of trust in and understanding of forecasting techniques, and the perceived negligibility of the performance boost that more advanced forecasting yields. Owing to this gap between the advancement of forecasting methods and the lag in their adoption by practitioners, questions about the applicability and transfer of computational results to real environments have arisen.

\bigskip

The purpose of this work is to evaluate the performance of a forecasting method based on exponential smoothing (ES)\nomenclature[]{ES}{Exponential smoothing}, using samples of demand data computed by a supply chain simulation environment. Subsequently, a sensitivity analysis will help provide evidence for the applicability of the method in practice. An inherent component of the quantitative analysis is the selection of the evaluation method for the forecasting techniques: understanding the choice and the operating mode of the error measurement procedure is crucial for drawing conclusions about the applicability of a forecasting method. The insights gained will be used to assess the merit of demand prediction using ES and its implications for a firm's revenue management. In the second part of the analysis, a long-term change of the demand level will give us an opportunity to assess the robustness and adaptability of ES. A change of a long-term demand pattern is a common challenge for companies in different industries, since it is nearly impossible to predict. A change in average demand can stem from internal factors, such as customer dissatisfaction, as well as external ones, such as a general decline in economic activity. This work will help us understand the potential that the application of a forecasting method can have, depending on the particular scenario.

\newpage


\section{Theoretical Background and Research Design}

\subsection{Method Description: Exponential Smoothing}

Invented by a US Navy Operations Evaluation Group during World War II, ES became widely used, mainly because it is simple, intuitive, and nevertheless accurate, as {\sc Makridakis} and {\sc Wheelwright (1987)} conclude. As the inventor aptly remarks, the ES model is a mathematical interpretation of a learning process similar to that of a human brain: the long-term estimate of the average is successively corrected by the experience of new observations, cf. {\sc Brown} and {\sc Richard F. Meyer (1960)}.

The forecast $\hat{d}_{t}$ for the subsequent period $t$ is computed as a linear combination of the demand realization $d_{t-1}$ of the previous period $t-1$ and the forecast $\hat{d}_{t-1}$ for that period. The smoothing parameter $\alpha$ determines how strongly the most recent demand realization is taken into account. $\alpha$ is chosen between 0 and 1, with values between 0.1 and 0.3 being common. Thus, the forecast is updated as follows:

\begin{align}

\hat{d}_{t} ~ = ~ \alpha d_{t-1}+(1-\alpha) \hat{d}_{t-1}.

\end{align} The choice of the smoothing parameter $\alpha$ is crucial to the accuracy of the forecast. The higher its value, the more strongly the estimates react to variations in demand; the lower it is, the more the forecast represents the overall average of the data set. The main distinctive feature of ES, compared to other extrapolative techniques such as the moving average, is that every single realization from the past is taken into account in the subsequent computation.

In fact, the only difference between ES and the moving average method is the weighting of previous observations. Recursive resolution of (1) produces the following geometric series:

\begin{align} \hat{d}_{t} = \alpha d_{t-1} + \alpha (1- \alpha ) d_{t-2} + \dots + \alpha (1- \alpha )^{t-1} d_0 + (1- \alpha )^t \hat{d}_0

\end{align} This resolution explains the name of the method, since the weighting of the observations is exponential: the latest realization obtains the strongest weight, and the subsequent weights decrease exponentially. The sum of the weights equals one (compare {\sc Schulte, 2001}):

\begin{align}

\alpha \sum_{i=0}^{\infty}(1- \alpha)^{i} ~ = ~ 1.

\end{align}
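The geometric weighting is easy to verify numerically; a small Python illustration of our own:

```python
alpha = 0.2
# Weights alpha * (1 - alpha)**i decay exponentially with the age i of the observation:
weights = [alpha * (1 - alpha) ** i for i in range(200)]

print(weights[0] > weights[1] > weights[2])  # True: the newest observation weighs most
print(abs(sum(weights) - 1) < 1e-12)         # True: the weights sum to one in the limit
```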

Another challenge of the method is the choice of the initial forecast $\hat{d}_{0}$. Often, the last observation can serve as the initial forecast, as {\sc R. G. Brown (1963)} proposes. However, for data that does not exhibit a clear correlation between subsequent realizations, the most recent observation can prove to be an outlier, distorting the following computation. Therefore, Sven {\sc Axsaeter (2006)} argues that the average demand is a suitable choice for the initial forecast, in case it can be estimated adequately.

Furthermore, this work concentrates on the variant of ES that is applied to a constant demand pattern, characterized by the following representation:

\begin{align}

d_{t} ~ = ~ \bar{d} + \epsilon_{t},

\end{align} where $\bar{d}$ is the average demand and $\epsilon_{t}$ represents the deviation of the realization from the average demand. The trend- and seasonally-adjusted models are not the subject of this work, owing to the pattern of the generated time series (see Implementation in Excel).

\bigskip

As {\sc Corsten (1990)} points out, the dilemma of choosing the right $\alpha$-value constantly follows the ES user. Whereas small $\alpha$-values have a strong smoothing effect, since they do not allocate much weight to the newest realization, high values rely on the persistence of the latest demand trend. Therefore, different $\alpha$-values should be applied in different economic environments. Figures 1 and 2 summarize the reaction of the forecasts to a temporary demand impulse and to a significant change of the demand level. They reveal the higher adaptability of high $\alpha$-values: these produce a higher error in the period that follows the impulse, but they also return to the original level much faster. Figure 1 illustrates how forecasts with a low $\alpha$-value have not returned to the old level even after 20 realizations, because they smooth the impulse and carry its value over the entire planning horizon. Figure 2 demonstrates the fast adaptation of forecasts with a high $\alpha$-value to a new demand level.

\smallskip

It is important to note that neither situation alone depicts the demand of a real forecasting environment. Rather, it is the more dominant of the two, a temporary impulse or a permanent level change, that drives the parameter choice; a realistic scenario includes both kinds of demand changes. The more the realizations fluctuate in both directions, the more smoothing is necessary to make valid forecasts, and therefore the lower the value of $\alpha$ has to be. Especially during the second part of the spreadsheet analysis, the crucial difference in the behavior of high and low $\alpha$-values will give us more insight into the applicability of the method to simulation problems. This behavior of the forecasts, essential to the ES method, provides the link between the basic attributes of exponential smoothing and the analysis in the following sections.
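The qualitative behavior summarized in Figures 1 and 2 can be reproduced with a short simulation (a sketch under our own assumed demand levels; the exact data behind the figures is not reproduced here):

```python
def smooth(alpha, demand, init):
    """Return the one-step-ahead ES forecasts for a demand series."""
    f, forecasts = init, []
    for d in demand:
        forecasts.append(f)
        f = alpha * d + (1 - alpha) * f
    return forecasts

# Permanent level change: demand moves from 10 to 20 and stays there.
shift = [10] * 5 + [20] * 21

low  = smooth(0.1, shift, 10)   # strong smoothing, slow adaptation
high = smooth(0.5, shift, 10)   # weak smoothing, fast adaptation

# The high-alpha forecast has essentially reached the new level, while
# the low-alpha forecast still lags behind after the same horizon:
print(round(low[-1], 2), round(high[-1], 2))
```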

\begin{flushleft}

\begin{figure}[htbp]

\centering

\includegraphics[width=1.05\textwidth, height=0.45\textheight]{fig1.pdf}

\caption{ES with different $\alpha$-values in case of a sudden demand impulse}

\label{Figure 1}

\end{figure}

\begin{figure}[htbp]

\centering

\includegraphics[width=1.05\textwidth, height=0.45\textheight]{fig2.pdf}

\caption{ES with different $\alpha$-values in case of a change of demand level}

\label{Figure 2}

\end{figure}

\end{flushleft}


\subsection{Analysis Procedure}

This work applies the ES method to a partial problem of a supply chain simulation environment, in order to examine whether it can be of value for a specific simulation setting. The introductory and concluding remarks embed the provided analysis into the relevance of ES for practitioners as a production planning and forecasting method. The data set provided by the simulation environment includes realized demand for 365 days in 5 different scenarios. These vary only in demand variability, with standard deviation values of 0, 4, 8, 12 and 14; the average demand amounts to 12. Furthermore, the demand is clustered into 3 customer classes that are charged their respective prices, namely 80.00, 90.00 and 100.00 \euro. In each period, the demand realization is generated by exactly one class, which the system chooses randomly with probability $1/3$ per class. The demand of each class follows the same distribution.
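The class mechanism described above might be sketched as follows (a hypothetical reconstruction of our own; the simulation environment's actual generator is not available, so the constant-demand $\sigma=0$ case is used for simplicity):

```python
import random

random.seed(7)
PRICES = {1: 80.00, 2: 90.00, 3: 100.00}   # prices per customer class, in euro

def generate_day(mean_demand=12):
    """One day: a single, randomly chosen class (probability 1/3 each)
    triggers the demand realization."""
    cls = random.choice([1, 2, 3])
    return cls, mean_demand, PRICES[cls]

days = [generate_day() for _ in range(365)]
print(len(days), {c for c, _, _ in days})  # 365 days across the three classes
```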

\smallskip

During the spreadsheet analyses in Excel, the parameters and the evaluation system will be chosen. Since the realized data allows us to compare different parameter values ex post, error assessment is crucial for observing the performance of the ES method with a set of different $\alpha$-values. This will be done over a predefined range of $\alpha$-values by means of an iterative process. Subsequently, the forecasts will be evaluated on the structural trends they reveal, including the reaction of the model to outliers. After a brief discussion of the method according to which the performance will be evaluated, a suitable value of $\alpha$ will be chosen, and the applicability of ES for the given scenarios with different demand distributions will be analyzed.

\bigskip

The second part of the analysis examines several scenarios of a significant change in average demand; the parameter varied is the long-term level of demand. It is assumed that the decision-maker has the default option of using the old estimate of the average demand, which is 12. The spreadsheet analysis will help identify opportunities for ES to improve the forecasts, depending on the magnitude and the direction of the demand level change. The scenarios include new average values both above and below the original one. The second dimension on which the performance of ES will be evaluated is the variance of demand, which is kept in proportion to the new average demand in order to allow a comparable evaluation of the forecasting method. Eventually, the method identified as achieving the best estimates will be implemented in the simulation environment. A sensitivity analysis will help us understand how a marginal change in the parameters of the chosen method ultimately changes the potential profits.


\section{Implementation in Excel}

\subsection{Main Assumptions (Distribution of Demand, etc.)}

An interpretable analysis of the performance of the ES method is only possible if the demand data is generated according to the assumptions on which exponential smoothing rests. Among these is the way the data for the method evaluation is generated. {\sc Brown} and {\sc Meyer (1960)} argue that the generated time series have to be locally stationary and should ideally follow a function, such as a polynomial wave, with a slowly changing mean and variance. It is important to note that data generated anew each period, with the same mean and distribution, cannot follow any function and would, as a matter of principle, give little insight into the operating mode of ES. However, for the second part of the analysis, the discrete values generated by the simulation environment feature a long-term change of mean, which represents a viable and common phenomenon in demand planning. The new mean is then a challenge for both the adaptable ES method and the constant average that was chosen incorrectly a priori. The competition between these two forecasting approaches is the central discussion of this work.

\smallskip

The reference point against which the forecasting accuracy will be measured is the prediction using the constant average value of 12. The generated demand follows a negative binomial distribution, as negative demand is not allowed, whereas positive demand is unbounded above. This demand distribution also allows standard deviation values that exceed the average. The choice of the distribution implies that demand is non-negative and that the realizations can only be discrete integer values. It is also important to note that the data are generated in a way that allows a general comparability of forecasting results. This requires that variations in the standard deviation of the data remain proportional to variations in the mean of the data set. For instance, in order to compare the forecasting behavior of two data sets with means of $4$ and $6$ respectively, we provide these data sets with standard deviation values of $2$ and $3$ respectively. Hence, the ratio between mean and standard deviation remains equal, and conclusions about the applicability of a forecasting method can be drawn.
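A possible way to generate such data is sketched below (our own illustration using NumPy's $(n, p)$ parameterization, where the mean is $n(1-p)/p$ and the variance $n(1-p)/p^2$, so that $p = \bar{d}/\sigma^2$ and $n = \bar{d}^2/(\sigma^2-\bar{d})$; this requires $\sigma^2 > \bar{d}$):

```python
import numpy as np

def neg_binom_demand(mean, sd, n_days=365, seed=0):
    """Draw non-negative integer demand with the given mean and standard
    deviation from a negative binomial distribution (needs sd**2 > mean)."""
    var = sd ** 2
    p = mean / var                  # success probability
    n = mean ** 2 / (var - mean)    # (possibly fractional) number of successes
    rng = np.random.default_rng(seed)
    return rng.negative_binomial(n, p, size=n_days)

demand = neg_binom_demand(12, 14)
print(demand.min() >= 0, demand.shape)  # non-negative integers for 365 days
```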

%Method evaluation%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Method Evaluation (Error Measurement)}

It is important to make sure that the accuracy of the model is measured adequately. The model that we consider the most accurate must have the lowest aggregate error over the entire forecasting period. Among the variety of error measurement methods, {\sc Carbone} and {\sc Armstrong (1982)} found that Mean Squared Error (MSE)\nomenclature[]{MSE}{Mean Squared Error} and Mean Absolute Percentage Error (MAPE)\nomenclature[]{MAPE}{Mean Absolute Percentage Error} are the two most popular techniques. However, when examining the evaluation methods, one should keep in mind the limited informational value of both MSE and Mean Absolute Deviation (MAD)\nomenclature[]{MAD}{Mean Absolute Deviation}. MAD is measured by calculating the average of the absolute deviations between forecasts and observations. MAD is not scale-adjusted, which makes it difficult to compare forecasting methods across data sets. MSE is a modification of MAD, with the single difference that it averages the squared deviations of the forecasts from the observations:

\begin{align}

\operatorname{MSE}\left[\hat{d}\right] ~ = ~ \operatorname{E}\left[(\hat{d}-d)^2\right].

\end{align}

In contrast to MSE, MAPE does not depend on scale and allows more general conclusions about the accuracy of a method. It is calculated as follows:

\begin{align}

\mbox{MAPE} ~ = ~ \frac{1}{n}\sum_{t=1}^{n} \left| \frac{d_{t}-\hat{d}_{t}}{d_{t}}\right|,

\end{align}

where $d_{t}$ describes the observation and $\hat{d}_{t}$ the forecast (compare {\sc Makridakis} and {\sc Wheelwright (1985)}).
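The MSE and MAPE definitions above translate directly into code (a sketch of our own; the function names are not from the original spreadsheet):

```python
def mse(actual, forecast):
    """Mean squared error between realizations and forecasts."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error; undefined if a realization is 0."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

print(mse([12, 12], [10, 14]))   # 4.0
print(mape([6, 20], [12, 12]))   # 0.7, i.e. (100% + 40%) / 2
```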

\smallskip

The operating modes of MSE and MAPE are inherently different and can support the selection of a forecasting method in different industries and business situations. With its implicit assumption of a quadratic loss function, MSE suits settings where a strong deviation of a prediction from actual demand would result in bottlenecks, thereby affecting the production or storage of other products. Also favorable for choosing MSE is a situation where a customer is more sensitive to stronger shortages of supply and might even question maintaining business relations. Although MSE is the most popular error measurement method, {\sc Armstrong} and {\sc Collopy (1992)} identified empirically that its reliability is rather low, mostly for the reasons stated above.

\bigskip

In this work, we will adopt MAPE in order to assess the forecasts, since a quadratic structure of the costs that derive from the failure to satisfy demand is not apparent. MAPE also gives us more comprehensive insights into the accuracy of the forecasts while we modify the demand distribution, since it dampens the effect of outliers. However, {\sc Armstrong (1978)} argued rightly that MAPE has two major drawbacks, namely its asymmetric consideration of deviations below and above the realization, as well as the missing boundary for very high errors that distort the results. To illustrate this behavior, let us take the average of $12$ that characterizes the demand in our model and consider demand realizations of $6$ and $20$, while the forecast stays at $12$. Using the MAPE formula above, we obtain a single percentage error (PE)\nomenclature[]{PE}{Percentage Error} equal to $100$\%\footnote{ $\hat{d}_{t} = 12$, ${d}_{t} = 6$. $\mbox{PE} ~ = ~ \left| \frac{12-6}{6}\right| ~ = ~ 1 ~ = ~ 100\%$} for the realization of $6$ and an error equal to $40$\%\footnote{$\hat{d}_{t} = 12$, ${d}_{t} = 20$. $\mbox{PE} ~ = ~ \left| \frac{12-20}{20}\right| ~ = ~ 0.4 ~ = ~ 40\%$} for the realization of $20$. Thus, low realizations result in immense distortions of the overall picture. Especially when the generated data approaches $0$, MAPE produces errors that account for several hundred percent. Another problem of MAPE is its inability to measure the performance in case of a demand equal to $0$, since $0$ cannot stand in the denominator. Although the given data does not include realization values equal to $0$, the future use of the method in the simulation environment with differently generated data could reveal this drawback. {\sc Armstrong (1978)} therefore suggested modifying the MAPE formula as follows:

\begin{align}

\mbox{SMAPE} ~ = ~ \frac{1}{n}\sum_{t=1}^{n} ~ \frac{\left|d_{t}-\hat{d}_{t}\right|}{d_{t}+\hat{d}_{t}},

\end{align}

so that the errors lie in the range between 0\% and 100\% and are less influenced by outliers. The Symmetric Mean Absolute Percentage Error (SMAPE)\nomenclature[]{SMAPE}{Symmetric Mean Absolute Percentage Error} will allow us to produce a more accurate evaluation of the forecasting technique and to avoid the asymmetric treatment of errors that lie above and below the forecast.
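The modified measure can be sketched as follows (our own code, using this work's SMAPE variant, which divides by $d_{t}+\hat{d}_{t}$ rather than by the half-sum used in some other SMAPE definitions):

```python
def smape(actual, forecast):
    """Symmetric MAPE: each term lies in [0, 1] for non-negative data."""
    return sum(abs(a - f) / (a + f) for a, f in zip(actual, forecast)) / len(actual)

# The MAPE example from above: both errors are now bounded below 100%.
print(smape([6], [12]), smape([20], [12]))  # ~0.333 and 0.25
```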

\smallskip

In the second part of the model, where we analyze the implications of a change in demand level on the value of $\alpha$, we will also employ MAD in order to cross-check the results of SMAPE. In contrast to MSE, MAD does not increase the weight of outliers by squaring them. However, since the measure is not scaled, it cannot be used to compare data sets with different mean values. MAD is calculated as follows:

\begin{align}

\mbox{MAD} ~ = ~ \frac{1}{n}\sum_{t=1}^{n} ~ {\left|d_{t}-\hat{d}_{t}\right|}

\end{align}

Cross-checking the SMAPE results can be important, since SMAPE can distort the optimization significantly, especially with certain data patterns. Although SMAPE is designed to treat lower and higher estimates equally, extreme realizations have a strong impact on the calculation of the optimal $\alpha$, as will be shown in the next sections.
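MAD as a control measure can be sketched alongside (our own code):

```python
def mad(actual, forecast):
    """Mean absolute deviation: unscaled average absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# A single outlier enters linearly, not quadratically as under MSE:
print(mad([12, 12, 40], [12, 12, 12]))  # 28/3, roughly 9.33
```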

%Model Design I: Constant Average Demand Model%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Model Design I: Constant Average Demand Model}

Microsoft Excel offers a useful add-in for exponential smoothing. The add-in includes the formulas, and the only inputs the user can work with are the value of $\alpha$ and the data. Thus, the value of $\alpha$ has to be changed manually for every new computation. Additionally, Excel offers the calculation of a standard error, which is, however, not a suitable accuracy measure, as will be explained in the following sections. Also, an optimization of the parameters by means of a linear program is not possible. For these reasons the add-in offers little help for an extensive evaluation of ES. Instead of using the predefined formulas, we will edit the formulas manually, in order to keep them adaptable to parameter changes.

\smallskip

The demand data stream generated by the simulation gives us an opportunity to use the average demand as the initialization of the forecasts. Especially for small $\alpha$-values it is important that this parameter is chosen adequately, since its weight in the first forecasts stays very high, at $(1-\alpha)$, $(1-\alpha)^2$, etc. Since the average demand is known a priori to be $12$, we can use this value as the initial forecast. After the first several realizations, the weight of the initial value decreases exponentially, so that even if the first estimate was inaccurate, its impact on the overall performance dwindles.
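The decay of the initial forecast's weight can be checked numerically (an illustration of our own):

```python
alpha = 0.05
# After t updates, the initial forecast enters with weight (1 - alpha)**t:
for t in (1, 10, 50, 365):
    print(t, (1 - alpha) ** t)
```

Even for a small $\alpha$ of 0.05, the weight of the initial value is negligible well before the end of the 365-day horizon.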

\smallskip

The time horizon of the forecast is 365 days, which allows us to make sufficiently certain statements about the reliability of the method and the application of different parameters; a shorter time range could seriously distort the method evaluation. Another relevant detail is the rounding of the forecast values, since the goal of the model is to simulate real production and warehousing processes. The forecasts are therefore rounded to zero decimal places. The consequence of this modification is an insignificant lowering of the sensitivity of the model to the parameters.

\begin{table}[h]

\caption{SMAPE results for Single Exponential Smoothing

with different demand patterns and $\alpha$- values (Computed in Microsoft Excel 2007)}

\centering

{\renewcommand{\arraystretch}{1}

\renewcommand{\tabcolsep}{0.15cm}

\begin{tabular}{c | c | c c c c c c c}

\hline \hline

\rule{0pt}{3ex} \raisebox{-2.2ex}{Demand} & \multicolumn{8}{c}{SMAPE (for $\alpha$ values and the average) in \%} \\ [1ex] \cline{2-9}

\rule{0pt}{3ex} & average & $0.01$ & $0.05$ & $0.1$ & $0.15$ & $0.2$ & $0.25$ & $0.3$ \\ [1ex]

\toprule

\rule{0pt}{3ex}$\bar{d}=12$; $\sigma=0$ & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\

$\bar{d}=12$; $\sigma=4$ & 13.544 & 13.544 & 13.691 & 14.023 & 14.403 & 14.495 & 14.705 & 14.705 \\

$\bar{d}=12$; $\sigma=8$ & 28.281 & 27.928 & 28.063 & 28.563 & 28.985 & 29.511 & 29.871 & 30.361 \\

$~\bar{d}=12$; $\sigma=12$ & 39.025 & 39.563 & 40.086 & 40.426 & 40.939 & 41.224 & 41.521 & 41.705 \\

$~\bar{d}=12$; $\sigma=14$ & 44.391 & 44.696 & 45.004 & 45.274 & 45.701 & 46.124 & 46.515 & 46.618 \\ [1ex]

\bottomrule

\end{tabular}}

\end{table}

\smallskip

Table 1 illustrates how the SMAPE value diminishes as the $\alpha$-value decreases. Obviously, with a standard deviation of 0 every value of $\alpha$ performs equally well, since every realization equals 12 and does not change the subsequent forecast. Only in the case of $\bar{d}=12$ and $\sigma=8$ does the $\alpha$-value of 0.01 seem more adequate than the plain average. However, this value is an insignificant alteration of the average, since new observations are weighted with 0.01 and hardly change the subsequent forecasts. Especially with the rounding of the forecasts, the influence of such a low $\alpha$-value becomes marginal.
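The grid evaluation behind Table 1 can be reproduced in outline (our own sketch with a stand-in demand series, since the simulation data is not reproduced here; forecasts are rounded to whole units as in the model):

```python
def smooth(alpha, demand, init=12):
    """One-step-ahead ES forecasts, rounded to whole units as in the model."""
    f, forecasts = float(init), []
    for d in demand:
        forecasts.append(round(f))
        f = alpha * d + (1 - alpha) * f
    return forecasts

def smape(actual, forecast):
    return sum(abs(a - f) / (a + f) for a, f in zip(actual, forecast)) / len(actual)

# Stand-in demand series fluctuating around a mean of 12:
demand = [10, 15, 9, 14, 12, 11, 16, 8, 13, 12] * 36   # 360 periods

grid = [0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3]
errors = {a: smape(demand, smooth(a, demand)) for a in grid}
best = min(errors, key=errors.get)
print(best, round(errors[best], 4))
```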

\smallskip

Finding the optimal smoothing parameter is a procedure that can be executed by statistical programs, which generate grids of error values for different parameters and allow sorting them starting with the lowest accumulated error, as we did above. In this work we want to solve the problem of finding an adequate value for $\alpha$ using the SOLVER add-in of Microsoft Excel, rather than the iterative method. SOLVER is a tool offered by Microsoft Excel for linear and non-linear optimization. Excel SOLVER has one limitation: it puts a limit of 200 on the number of decision variables (also referred to as changing cells). For this reason, a SOLVER extension, Frontline's Premium Solver, was employed in order to incorporate all the changing cells; the extension allows up to 2000 decision variables.\footnote {The free trial version of the extension can be downloaded at http://www.solver.com/}

\smallskip

With the help of Frontline's Premium Solver, the problem can be solved in 3 steps, analogous to a standard linear program: first the objective is formulated, then the decision variables are chosen, and eventually the constraints are indicated.

\begin{enumerate}

\item The \emph{objective} of the SOLVER-Problem is to minimize the cumulated error, that has been chosen to describe the deviations.

\item In our case, the only \emph{variable} is $\alpha$. In this context it is important to note that an $\alpha$-value of 1 corresponds to the naive forecasting technique, since it merely takes the latest observation into account when predicting the subsequent realization; the exponentially smoothed average, which is the forecast for the current period, is weighted with $(1-\alpha)$, in other words with $0$. The other extreme, an $\alpha$-value of 0, is identical to our benchmark, the average demand, since the forecasts do not change throughout the simulation run and stay at $\hat{d}_{0}=12$.

It is also possible to use the SOLVER function to find the initial forecast value $\hat{d}_{0}$ that would minimize the subsequent errors. However, the information on the initial estimate would be of value only for the specific data set, so that newly generated data would require a new estimate. In contrast to $\hat{d}_{0}$, the value of $\alpha$ can be transferred to new data sets with the same distribution properties. Therefore we forgo an optimization of the model initialization here.

\item The \emph{constraints} are the restrictions on the value of $\alpha$ that are defined in the description of the ES method, namely $\alpha \in [0,1]$.

\end{enumerate}
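Outside Excel, the same three-step optimization can be approximated without SOLVER; below is a hedged Python sketch of our own that minimizes SMAPE over $\alpha \in [0,1]$ by ternary search (this assumes the error curve is unimodal in $\alpha$, which is an assumption rather than a guarantee):

```python
def smooth(alpha, demand, init=12.0):
    """One-step-ahead ES forecasts for a demand series."""
    f, forecasts = init, []
    for d in demand:
        forecasts.append(f)
        f = alpha * d + (1 - alpha) * f
    return forecasts

def smape(actual, forecast):
    return sum(abs(a - f) / (a + f) for a, f in zip(actual, forecast)) / len(actual)

def best_alpha(demand, steps=60):
    """Ternary search for the alpha in [0, 1] with minimal SMAPE."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if smape(demand, smooth(m1, demand)) < smape(demand, smooth(m2, demand)):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

demand = [12, 14, 10, 13, 11, 12, 15, 9, 12, 12] * 10
alpha_star = best_alpha(demand)
print(0.0 <= alpha_star <= 1.0)  # True
```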

\bigskip

Having specified the error measurement method which we use to determine the optimal $\alpha$, we can solve the optimization problem using the SOLVER function, as described above. As discussed at length in this section, SMAPE provides the most accurate evaluation of the method's performance; thus, the objective of the optimization is to minimize the value of SMAPE. One approach is to do this successively for each distribution pattern. To summarize the computational results in one table, SOLVER indicates the optimal $\alpha$-values in one table by minimizing the sum of the SMAPE values over the distributions. In order to control the performance of SMAPE as an error measurement mechanism, other methods, such as MAD, have been used to verify the results stated above. Taking the average demand as the estimate was found to be the most precise forecasting method by all of them. This finding does not hold for the second part of the simulation, where certain demand patterns can lead to different cumulated error results, as explained in the following section.

%Model Design II: Scenario Model(changing level of demand%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Model Design II: Scenario Model (Changing Level of Demand)}

This case is especially interesting since it gives us more insight into the operating mode of ES with different distributions and parameters. It also allows us to identify possible pitfalls a decision-maker can run into when using an error measurement mechanism that is inappropriate for the given time series.

\smallskip

After having identified scenarios with changes in average demand and with differences in variance, we generated data according to the negative binomial distribution to fit our new demand assumptions. Three scenarios with average demand values of $4$, $6$ and $8$, below the original demand of $12$, as well as three demand levels above it, namely $16$, $18$ and $20$, were chosen here. The changes in the applicability of the ES forecasting method were monitored with regard to different standard deviations of demand. In doing so, the relevant standard deviation values for the data sets were calculated as a percentage of the demand average, so that comparative conclusions are possible. An important assumption of the model was, again, the negative binomial distribution chosen to represent the demand pattern.

\bigskip

The general idea behind the spreadsheet model is analogous to the one employed in the first part of the analysis. Thus we followed the same 3-step computational procedure described in the previous section. The optimization of the $\alpha$-value is performed by the SOLVER extension for each of the generated demand levels separately (for further technical details compare Appendix fig xxx). Although we have concluded that SMAPE is the optimal error measurement mechanism, it is nevertheless important to verify the results with a simpler method. In this case we chose MAD, since it does not amplify the effect of outliers; especially for demand patterns with high variance, error measurement mechanisms like MSE would distort the results of the method evaluation. The criterion for the precise choice of $\alpha$ remains, however, SMAPE; the purpose of MAD is merely to reveal extreme differences in the parameter choice. Since MAD delivers an average error that is not scaled, it cannot be used to compare errors of demand distributions with different mean values, which is part of this scenario model.

\bigskip

Another crucial parameter here is the initialization value of the forecasts. We chose the old demand level of $12$, since it resembles the base case of the forecasting. The lower the new optimal $\alpha$-values, the stronger the case for the decision-maker to abstain from employing the ES method. In the extreme case, when $\alpha$ equals $0$, so that $12$ remains the best estimate throughout the planning window, any use of ES would result in less precise forecasts than simply using the old level of demand.

%Computational results%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection {Computational results}

\subsubsection{Optimal Smoothing Parameters for Constant Demand Model}

\emph{Table 2} summarizes the results of the optimization with SOLVER. The report produced automatically by SOLVER (compare App. figure 1) demonstrates the operating mode of the optimization. The total SMAPE value (the sum of the SMAPE values of the different data distribution streams) could be reduced from 153.57 to 124.89, as compared to the least adequate choice of $\alpha$, which equals $1$. The optimal $\alpha$-values in the cells $B1$, $E1$, $H1$, $N1$, $K1$ are either $0$ or close to $0$ (for further technical details compare Appendix fig xxx). As discussed above, an $\alpha$-value of $0$ indicates that the initial value, which we have chosen to be the average, remains the estimate throughout the simulation run, since new realizations flow into the forecasts with a weight of $\alpha=0$.

As expected, SMAPE values increase almost proportionally with the standard deviation of the data set. In the case of a standard deviation equal to $0$, the $\alpha$-value can be chosen arbitrarily, since new realizations remain identical with the mean, namely $12$. The other $\alpha^{\ast}$-values remain in the range of $0$ to $0.01$, which is uncommon for the ES method. A smoothing parameter in this range considers new realizations only marginally; instead, the time series are smoothed to an extent that the forecasts almost always represent the average. The result is comprehensible, since the high degree of smoothing can be explained by extreme fluctuations of demand that do not allow a smoothing constant to fit any development, or a link between subsequent periods. This demonstrates again the necessity of a function that generated demand ideally has to follow in order to provide an optimal environment for the application of ES.
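The dominance of the initial value at near-zero smoothing parameters follows directly from recursively expanding the ES update equation (standard single-ES algebra, restated here for reference):

\[
\hat{d}_{t+1} \;=\; \alpha d_{t} + (1-\alpha)\,\hat{d}_{t}
\;=\; \alpha \sum_{k=0}^{t-1} (1-\alpha)^{k}\, d_{t-k} \;+\; (1-\alpha)^{t}\, \hat{d}_{1}.
\]

For $\alpha = 0$ every realization receives weight zero, so the forecast stays at the initialization $\hat{d}_{1} = 12$ for the whole planning window; for $\alpha$ close to $0$ the realizations enter the forecasts only marginally.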

\begin{table}[pt]

\begin{threeparttable}[pt]

\caption{Optimization of the $\alpha$-value with Microsoft Excel SOLVER\newline and SMAPE criterion

for Single Exponential Smoothing}

\centering

{\renewcommand{\arraystretch}{0.87}

\renewcommand{\tabcolsep}{0.18cm}

\begin{tabular}{l c c c c c c c}

\toprule

\toprule

\rule{0pt}{2ex} & & $\bar{d}=12$; $\sigma=0$ & &~~~~~~~~~~~~~& & $\bar{d}=12$; $\sigma=4$ & \\[1ex]

\rowcolor[gray]{.8} \rule{0pt}{2.5ex} & & $\alpha^{\ast} = 0.00$ & &~~~~~~~~~~~~~& & $\alpha^{\ast} = 0.009$ & \\

\rule{0pt}{3ex} \emph{t} & Demand & $\hat{d}_{t}$ & $PE_{t}$ &~~~~~~~~~~~~~& Demand & $\hat{d}_{t}$ & $PE_{t}$ \\

\hline

\rule{0pt}{3ex} $1$& $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $12$ & $12$ & $0.00$ \\

\rule{0pt}{0ex} $2$& $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $14$ & $12$ & $7.69$ \\

\rule{0pt}{0ex} $3$ & $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $6$ & $12$ & $33.33$ \\

\rule{0pt}{0ex} $4$ & $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $15$ & $12$ & $11.11$ \\

\rule{0pt}{0ex} $5$ & $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $14$ & $12$ & $7.69$ \\

\rule{0pt}{0ex} $6$ & $12$ & $12$ & $0.00$ &~~~~~~~~~~~~~& $14$ & $12$ & $7.69$ \\

\rule{0pt}{0ex} $n$ & \ldots & \ldots & \ldots &~~~~~~~~~~~~~& \ldots &\ldots & \ldots \\

& & \multicolumn{2}{r}{\fbox{$SMAPE = 0.00$}}&~~~~~~~~~~~~~&& \multicolumn{2}{r}{\fbox{$SMAPE = 13.54$}} \\ [4ex]

\midrule[0.5pt]

\midrule[0.5pt]

\rule{0pt}{2ex} & & $\bar{d}=12$; $\sigma=8$ & &~~~~~~~~~~~~~& & $\bar{d}=12$; $\sigma=12$ & \\[1ex]

\rowcolor[gray]{.8} \rule{0pt}{2.5ex} & & $\alpha^{\ast} = 0.01$ & &~~~~~~~~~~~~~& & $\alpha^{\ast} = 0.00$ & \\

\rule{0pt}{3ex} \emph{t} & Demand & $\hat{d}_{t}$ & $PE_{t}$ &~~~~~~~~~~~~~& Demand & $\hat{d}_{t}$ & $PE_{t}$ \\

\hline

\rule{0pt}{3ex} $1$ & $4$ & $12$ & $50.00$ &~~~~~~~~~~~~~& $51$ & $12$ & $61.90$ \\

\rule{0pt}{0ex} $2$ & $8$ & $12$ & $20.00$ &~~~~~~~~~~~~~& $6$ & $12$ & $33.33$ \\

\rule{0pt}{0ex} $3$ & $21$ & $12$ & $27.27$ &~~~~~~~~~~~~~& $35$ & $12$ & $48.94$ \\

\rule{0pt}{0ex} $4$ & $8$ & $12$ & $20.00$ &~~~~~~~~~~~~~& $24$ & $12$ & $33.33$ \\

\rule{0pt}{0ex} $5$ & $3$ & $12$ & $60.00$ &~~~~~~~~~~~~~& $1$ & $12$ & $84.62$ \\

\rule{0pt}{0ex} $6$ & $25$ & $12$ & $35.14$ &~~~~~~~~~~~~~& $6$ & $12$ & $33.33$ \\

\rule{0pt}{0ex} $n$ & \ldots & \ldots & \ldots &~~~~~~~~~~~~~& \ldots &\ldots & \ldots \\

& & \multicolumn{2}{r}{\fbox{$SMAPE = 27.93$}}&~~~~~~~~~~~~~&& \multicolumn{2}{r}{\fbox{$SMAPE = 39.03$}} \\[4ex]

\midrule[0.5pt]

\midrule[0.5pt]

\rule{0pt}{2ex} & & $\bar{d}=12$; $\sigma=14$ & &~~~~~~~~~~~~~& & & \\[1ex]

\rowcolor[gray]{.8} \rule{0pt}{2.5ex} & & $\alpha^{\ast} = 0.00$ & &~~~~~~~~~~~~~& & & \\

\rule{0pt}{3ex} \emph{t} & Demand & $\hat{d}_{t}$ & $PE_{t}$ &~~~~~~~~~~~~~& && \\

\hline

\rule{0pt}{3ex} $1$ & $2$ & $12$ & $71.43$ &~~~~~~~~~~~~~& & & \\

\rule{0pt}{0ex} $2$ & $3$ & $12$ & $60.00$ &~~~~~~~~~~~~~& & & \\

\rule{0pt}{0ex} $3$ & $6$ & $12$ & $33.33$ &~~~~~~~~~~~~~& & & \\

\rule{0pt}{0ex} $4$ & $29$ & $12$ & $41.46$ &~~~~~~~~~~~~~& & & \\

\rule{0pt}{0ex} $5$ & $2$ & $12$ & $71.43$ &~~~~~~~~~~~~~& \multicolumn{2}{r}{\fbox{$Total~SMAPE = 124.89$}} & \\

\rule{0pt}{0ex} $6$ & $1$ & $12$ & $84.62$ &~~~~~~~~~~~~~& & & \\

\rule{0pt}{0ex} $n$ & \ldots & \ldots & \ldots &~~~~~~~~~~~~~& & & \\

& & \multicolumn{2}{r}{\fbox{$SMAPE = 44.39$}}&~~~~~~~~~~~~~&& & \\ [1ex]

\bottomrule

\bottomrule

\end{tabular}}

\begin{tablenotes}

$\bar{d}$ denotes the average demand of the planning window, \\

$\alpha^{\ast}$ denotes the smoothing parameter that minimizes the SMAPE value,\\

$\sigma$ denotes the standard deviation of the demand realizations in the planning window,\\

$PE_{t}$ denotes the percentage error of the period,\\

$\hat{d}_{t}$ denotes the forecast for the period \emph{t}.

\end{tablenotes}

\end{threeparttable}

\end{table}

\smallskip

\newpage

\subsubsection {Optimal Smoothing Parameters for the Scenario Model}

The calculations of the optimal $\alpha$ for the cases in which the average demand rises or falls are summarized in \emph{Table 3}. One can derive the following observations from the calculation results:

\begin{itemize}

\item[-] For demand distributions with a standard deviation equal to $0$, the computed $\alpha$-values are consistently higher than $0.85$. This is caused by the rounding of the forecasting values. Here, the only value that needs to be corrected is the initial forecast, which is $12$. The other values stay at the new average and can be used as an optimal forecast for the subsequent period with any sufficiently high smoothing parameter.

\item[-] Scenarios with a lowering of the demand level seem to induce fundamentally different parameters than scenarios with higher demand levels. As a rule, optimal $\alpha$-values seem to be higher for an unexpected and long-lasting demand drop than for a demand increase of equal magnitude. This behavior can be observed at the comparable variance levels of $50$ and $100$ percent of the mean. The reason can be identified once we examine the generated time series and observe that the room for deviations grows with the average demand level. Especially low average values, such as $4$, are likely to induce low smoothing constants. It lies in the nature of the negative binomial distribution that realizations below and above the average are equally probable. According to this logic, low demand levels have a very short range of integers that can be demand realizations. Thus, it is more probable that two subsequent values lie close to each other, especially if the fluctuation is low.

\item[-] The optimal $\alpha$-values also increase with the difference between the expected demand level and the level incurred. The effect is especially visible for the drop of the demand level. Both observations can be explained by the fact that the more the new values deviate from the wrongly assumed base case, the less advantageous it is to take the non-valid average. Even with realizations that fluctuate significantly, exponentially smoothed values seem to produce more precise predictions than the incorrect average. With values around $0.2$, they still consider the initial value of $12$ to a certain degree at the very beginning of the simulation. However, the weight of the initial forecast becomes negligible after several realizations, in contrast to extremely low $\alpha$-values, which increase the influence of the initialization tremendously.

\item[-] SMAPE and MAD generally produce very similar results. The deviation of the optimal $\alpha$-values lies mostly in the range of a few hundredths. As we have concluded above, SMAPE is the most precise error measurement mechanism, but it can still mislead the user when applied to certain demand patterns. For instance, the differences between the SMAPE and MAD results for the demand level of $4$ are obvious. Analogously, for demand distributions with high fluctuations, such as data sets with a $\sigma$-value equal to 150\% of the mean, SMAPE produces $\alpha$-values that are consistently close to $1$, whereas evaluation with MAD results in a low smoothing parameter. The reason for such a strong deviation is the mechanism of error calculation that SMAPE employs: it dampens high errors by limiting the percentage error to 100\%, whereas MAD does not scale the error at all. Thus, if we employ the results produced by MAD, we obtain errors in every period, but the cumulated error will be smaller than the one produced by SMAPE. SMAPE prefers a smaller number of periods with any deviation between forecast and realization, and it reduces the impact of upward outliers. Exactly this environment is given by the demand sets with high fluctuations, where many realizations lie between $0$ and $4$ and a high number of upward outliers occur. Here the SMAPE criterion chooses a high $\alpha$-value and ignores the fact that the errors in case of outliers are extremely high, since they are compensated by subsequent realizations that are below the average. Here it is often advantageous to use the realization of the previous period as a forecast, since only three values, namely $1$, $2$ and $3$, are possible.
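The bounding behavior described above can be checked with a small sketch (our illustration, not part of the spreadsheet model): however large an upward outlier grows, its per-period SMAPE term stays below $100$, while the absolute error that MAD averages grows without bound.

```python
def smape_term(demand, forecast):
    """Per-period symmetric percentage error, bounded above by 100."""
    return 100.0 * abs(demand - forecast) / (demand + forecast)

forecast = 2.0  # a forecast near the typical low realizations
for demand in (4.0, 30.0, 300.0):
    print(f"demand={demand:5.0f}: "
          f"SMAPE term={smape_term(demand, forecast):6.2f}, "
          f"absolute error={abs(demand - forecast):6.1f}")
# SMAPE terms: 33.33, 87.50, 98.68; absolute errors: 2.0, 28.0, 298.0
```

The outlier of $300$ contributes barely more to SMAPE than the outlier of $30$, which is exactly why the criterion tolerates a high $\alpha$ on outlier-heavy, low-demand series while MAD does not.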