Unemployment can be defined as an economic indicator covering people who are willing and able to work but cannot find a job, for example because of a lack of work opportunities. Bear in mind, however, that the term 'economically inactive' differs from unemployment: the economically inactive are people who are physically able to work but do not want to, and they are not counted as unemployed.
Generally, a high level of unemployment indicates that the economy is struggling, because it shows a large supply of labour facing only limited demand from employers. Moreover, high unemployment usually means that the economy's resources are not being used in the most efficient way.
Furthermore, unemployment has a great impact on society. People who are unable to find work must rely on some other source of income, such as family or government benefits, which can make life considerably more difficult. In addition, studies have shown a relationship between unemployment and crime, an issue that has to be addressed.
It is extremely hard to pin down the cause of unemployment, and several types are usually distinguished. Cyclical unemployment refers to unemployment linked to the rises and falls of the business cycle. Structural unemployment arises when a major change in the economy or the labour market leaves workers' skills mismatched with the available jobs. Frictional unemployment describes people moving between jobs. Last but not least, seasonal unemployment refers to seasonal work such as farm labour or an 'Easter egg factory'.
The main objective of this project is to estimate the UK unemployment rate for the end of 2010, using unemployment rate data from 1990 to 2009 obtained from the European Central Bank. The standardised unemployment rate data cover both males and females across all age ranges.
UK Unemployment Background Information and Brief History
The unemployment history of the UK is closely tied to the economic and social history of the country.
The 'postwar boom' of the 1950s and 1960s produced a very low rate of unemployment. Servicemen returning from the Second World War had been promised full employment, and no government dared break that assurance. Evolving technology and a more stable international trade environment created many job opportunities, while Keynesian economics and the apparent stability of the Phillips Curve, which postulated a trade-off between high inflation and low unemployment, underpinned policy. On the other hand, most women at the time remained classed as 'economically inactive'.
Unfortunately, in the 1970s the economy was hit by the energy crises of 1973 and 1979, both of which produced 'stagflation', meaning that inflation and unemployment rose together; as a result the Phillips Curve relationship broke down and could no longer guide policy. Fixed exchange rates pegged to other European countries such as Germany forced members to deflate their economies to keep pace with low inflation abroad. The failure of the late-1960s labour market reform proposal 'In Place of Strife' left a situation in which union power increasingly stifled markets by keeping wages high. In 1972, unemployment passed one million for the first time. During the 1979 'Winter of Discontent', triggered by frozen pay, unemployment reached 1.1 million, and the Conservatives swept to power on the message that 'Labour isn't working'.
In the early 1980s unemployment kept rising, passing 3 million in 1982. The figure of 3,070,621 in January 1982 represented 12.5 per cent of the working population, and the percentage was even higher in some parts of the country. In Northern Ireland unemployment stood at 20 per cent, while some areas dominated by declining industries such as coal mining recorded even higher rates.
Unemployment fell through the 1990s, dropping below one million in March 2001 for the first time since 1975. The reasons for this achievement are disputed. The Conservatives argued that the Labour government had inherited a 'golden economic legacy' from the outgoing Major administration and had simply ridden an eight-year global upswing built on Tory economic plans. Labour, in contrast, claimed the figures vindicated its economic management and reforms (especially the independence given to the Bank of England), its extensive attention to skills and education, and the impact of its New Deal programme in reducing joblessness.
Why Choose the ARIMA Model for Forecasting
We now discuss the advantages and disadvantages of using an ARIMA model. First, the model can capture a great deal of the information in time series data for forecasting purposes. Second, it avoids the uncertainty and specification problems that arise in multivariate models.
The disadvantages of the ARIMA model are as follows:
ARIMA models are essentially backward looking, and are poor at predicting turning points.
Their economic significance is not clear, since no economic theory or structure is applied in the model.
The parameters are assumed to be constant.
Only short-term forecasting is accurate, and at least 50 observations are recommended in order to calculate an accurate prediction.
Although the disadvantages appear to outnumber the advantages, according to Stockton and Glassman (1987) and Litterman (1986), "ARIMA models frequently outperform more sophisticated structural models in terms of short-run forecasting ability". The model's short-run forecasting reliability is therefore well established, and since this project predicts UK unemployment for the remaining three quarters of 2010, the model gives us confidence of an accurate result.
Why Is Unemployment Rate Forecasting Important?
"Business, more than any other occupation, is a continual dealing with the future; it is continual calculation, an instinctive exercise in foresight" (Henry R. Luce 1967).
Forecasting is an important tool for decision makers such as governments, helping them reduce risk and uncertainty. Within the UK working population, the unemployment rate stands high and has kept increasing, to almost 8 per cent. Governments use forecasts to choose the most suitable policy response (for example raising or lowering interest rates, or adjusting exports and imports) in order to reduce unemployment and achieve economic balance in society. A single decision-making mistake can lead to economic failure and recession (for example, the big UK recession of 1991). That is why forecasting matters not only for unemployment but for the whole economy.
Other Forecasting Results
Data source: The Market Oracle, 15/10/2008 (http://www.marketoracle.co.uk/Article6812.html)
As the forecasting graph provided by The Market Oracle shows, unemployment was measured at 1.79 million in 2008 and was projected to rise to 2.6 million over the following 18 months, to April 2010. "The news surprised journalists and academic economists that went scurrying to further revise forecasts that barely a few months ago were calling for UK unemployment to hit between 3 and 3.4 million by the time of the next general election" (The Market Oracle Journal, 20/01/2010).
According to the European Central Bank data, the unemployed population is 2.458 million and the unemployment rate is 7.7%, noticeably lower than the predicted value. This signals that the unemployment rate has stabilised and is expected to decline, strengthening the UK economy. The falling unemployment rate may be due to the flexibility of UK workers, who may accept lower wages and jobs with shorter working hours.
Data Source Description and Analysis
The aim of this project is to forecast UK unemployment up to the last quarter of 2010. The data are collected from the European Central Bank statistical data warehouse (Statistical Office of the European Commission (Eurostat), 12/4/2010), a trustworthy and reliable source for this test. The data cover both males and females of all age ranges. Furthermore, the data are seasonally adjusted: because the rate is subject to seasonal influences, the real trend only becomes observable once seasonal elements are removed. Additionally, the data have been converted into a time series so that the sequence can be observed over time.
Data period: first quarter of 1990 to first quarter of 2010
Static and dynamic properties of the data
Figure 1 The actual plot from STATA of UK unemployment rate (1990-2010)
Figure 2 The plot from the European Central Bank of UK unemployment rate (1980-2010)
From Figure 2 we can see a rapid increase in unemployment in 1991. The economy fell into recession owing to inflation; the pound weakened on the exchange market, and the government judged that the best way to protect it from further falls was to raise interest rates. The rise in interest rates did slow inflation, but raising them too far sharply increased mortgage costs. As mortgage costs rose, house prices fell, which depressed spending and led to recession (T. Pettinger). The unemployed population reached just under 3 million, and the rate peaked at 10.39% in February 1993.
The UK unemployment rate then fell and remained stable at roughly 4.5-6% from 2000 to 2007. In 2008, however, it rose again owing to the European credit crunch: the contraction of credit made banks cut back mortgage lending, which again hit consumer spending and investment (T. Pettinger). The unemployment rate was also expected to rise because of weak exports and international trade.
Graphical Analysis of Stationarity
Figure 3 The plot of showing autocorrelation of UK unemployment rate data
By observing the autocorrelation plot of the UK unemployment rate data, we can judge whether the data are stationary. According to Box, Jenkins and Reinsel (1994, p. 23), "The data must be roughly horizontal along the time axis. Data fluctuate around a constant mean, independent of time, and the variance of the fluctuation remains essentially constant over time." In other words, for a stationary series the observations should be independent of previous values, with no repeated patterns or systematic similarities between observations. From the plot we can provisionally reject the hypothesis of stationarity, since wavy patterns are visible and autocorrelation is present. For a more accurate result we move on to the Augmented Dickey-Fuller unit root test.
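The sample autocorrelations behind such a plot are straightforward to compute. The sketch below is an illustration of the calculation only, using a made-up trending series rather than the ECB data:

```python
# Sketch: sample autocorrelation r_k, the quantity plotted in an ACF correlogram.
def acf(series, max_lag):
    """r_k = sum (y_t - ybar)(y_{t+k} - ybar) / sum (y_t - ybar)^2."""
    n = len(series)
    mean = sum(series) / n
    denom = sum((y - mean) ** 2 for y in series)
    return [
        sum((series[t] - mean) * (series[t + k] - mean) for t in range(n - k)) / denom
        for k in range(1, max_lag + 1)
    ]

# A trending (non-stationary) series, like the unemployment level, shows
# large autocorrelations that decay only slowly as the lag grows:
trend = [5.0 + 0.1 * t for t in range(40)]
print(acf(trend, 3))
```

For a stationary series the same calculation drops quickly towards zero, which is exactly the visual criterion described above.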
In this chapter we establish the research methods used to achieve the stated aim and objectives, and address the validity and dependability of the project. The first section discusses a variety of concepts about stationary data and the flow of the Box-Jenkins methodology for the ARIMA model. The second section discusses the single exponential smoothing method for forecasting.
The figure below shows the flow of the Box-Jenkins methodology for time series modelling.
An Introduction to the ARIMA Model
Unlike other forecasting methods, ARIMA models are not based on economic structure or econometric knowledge; they assume that the past time series data, plus error, contain enough information for forecasting. ARIMA stands for Autoregressive Integrated Moving Average. The model became well known for time series analysis and prediction after Box and Jenkins (1970) effectively combined the behaviour and information of each component. Furthermore, the model can also be applied to non-stationary data, since it provides an initial method for differencing the data, which is what the 'integrated' part refers to. We will first examine the core of the ARIMA model.
Figure 6 Box-Jenkins Methodology for ARIMA Flow (Forecasting, 3rd Edition)
The steps in the graph above show how the ARIMA model is used to produce our forecast.
1.1 Identification of the General Aspects of Time Series Data
Referring to Forecasting, 3rd Edition, p. 82 (S. Makridakis), "data = pattern + error = f(trend-cycle, seasonality, error)". In most time series analysis we assume that the data consist of a systematic pattern plus random noise, which usually makes the pattern difficult to identify. Time series analysis therefore typically applies a progression of filtering in order to observe the real sequence of the data.
Most time series patterns can be separated into trend and seasonality. A trend is a long-term increase or decrease unaffected by calendar effects; examples include business indicators, oil production and electricity production. Seasonality appears when the data contain seasonal factors (quarter of the year, month, weekend or day), usually described as calendar effects: the data are significantly high or low in a steady direction and behave approximately the same every year.
Once the data have been stabilised, an autoregressive model can be effectively coupled with a moving average to form a general and useful class of time series models (Forecasting, 3rd Edition, S. Makridakis). These models can only be used on stationary data, so the data must be transformed into a stationary time series. If the plot shows the data distributed around a constant mean, with autocorrelations dropping quickly towards zero, stationarity can be assumed; if not, we conclude that the data are non-stationary. For a more accurate assessment of stationarity we can apply the ADF unit root test.
Augmented Dickey-Fuller (ADF) unit-root test
The ADF unit root test is the most popular test for non-stationarity in sample time series data. The test adds lagged difference terms of the dependent variable in order to absorb and control for serial correlation. The outcome of the ADF test is judged from the p-value, or equivalently by comparing the test statistic with critical values at the 1%, 5% and 10% significance levels (MacKinnon 1991): the lower the p-value, the stronger the evidence of stationarity.
Figure 3 ADF Test 1
The test above reports a relatively high p-value of 0.9025, indicating that the UK unemployment rate data are non-stationary. As usual, for the result to be significant the p-value should be lower than 0.05. Having established that the time series is non-stationary, the next step is to show how to remove the non-stationarity.
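To illustrate what the unit root test is doing, here is a minimal sketch of the non-augmented Dickey-Fuller regression. This is an assumed simplification: the real ADF test in STATA adds lagged difference terms and uses MacKinnon p-values, and the random walk below is synthetic, not the unemployment data.

```python
import math
import random

def dickey_fuller_t(y):
    """Regress dy_t = a + g*y_{t-1} + e_t and return the t-statistic on g.

    Under a unit root g = 0; a strongly negative t-statistic (below roughly
    -2.9 at the 5% level with a constant) rejects non-stationarity.
    """
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]                       # lagged level
    n = len(dy)
    xbar = sum(x) / n
    dbar = sum(dy) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (di - dbar) for xi, di in zip(x, dy))
    g = sxy / sxx                    # slope on the lagged level
    a = dbar - g * xbar
    resid = [di - a - g * xi for xi, di in zip(x, dy)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return g / math.sqrt(s2 / sxx)   # compare with Dickey-Fuller critical values

# A random walk (a unit-root process) yields a t-statistic near zero,
# well above the -2.9 rejection point:
random.seed(1)
rw = [0.0]
for _ in range(199):
    rw.append(rw[-1] + random.gauss(0, 1))
print(dickey_fuller_t(rw))
```

A mean-reverting series run through the same function produces a large negative t-statistic, which is the pattern the differenced unemployment data show.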
Differencing is usually the best method of removing non-stationarity. Our textbook, Forecasting, 3rd Edition (S. Makridakis), notes that "Trends, or other non-stationary patterns in the level of a series, result in positive autocorrelations that dominate the autocorrelations diagram." After differencing the data, the ADF test is applied again to test the stationarity of the UK unemployment data; the results are as below:
Figure 4 ADF test 2
According to Figure 4, a second level of differencing is not required, since the p-value becomes significant after the data are differenced once. The value of 0.0222 satisfies the significance level (lower than 0.05).
Now that it is clear the UK unemployment rate is stationary after first-order differencing, we can approach the next step of forecasting: model selection for the ARIMA model.
1.2 Model Selection
Next, we need to identify whether there is any seasonality, since seasonality produces a large partial autocorrelation coefficient at the seasonal lag, which affects the stationarity of the data. Plotting both the autocorrelation (ACF) and partial autocorrelation (PACF) correlograms is always useful for deciding whether the data are autocorrelated. The ACF plot displays the values of the autocorrelation function graphically, letting us identify whether the signal contains repeating patterns. Referring to (Box & Jenkins, 1976; see also McDowall, McCleary, Meidinger, & Hay, 1980), "PACF is an extension of autocorrelation, where the dependence on the intermediate elements (those within the lag) is removed. In other words the partial autocorrelation is similar to autocorrelation, except that when calculating it, the (auto) correlations with all the elements within the lag are partialled out". The PACF correlogram is thus essentially like the ACF, but gives a better picture of the sequential dependence at each lag. The ACF and PACF graphs for the UK unemployment rate are plotted below.
Figure 5: ACF plot of UK unemployment rate    Figure 6: PACF plot of UK unemployment rate
Figures 5 and 6 show that the UK unemployment rate series does not look like a white noise model. We can identify autocorrelation in the data, since the autocorrelations decrease only slowly as the number of lags increases. Additionally, the ACF plot shows lags 1, 2 and 3 spiking outside the 95% limits. Together with the ADF unit root test in section 1.1 above, this confirms that the data are non-stationary; but after the data are differenced, the ADF test reports a significant result for stationarity, with a p-value of 0.0222.
Figure 7: ACF plot of D.Urate    Figure 8: PACF plot of D.Urate
Figures 7 and 8 show the ACF and PACF plots after first-order differencing of the UK unemployment data. In the ACF plot we can see a mixture of exponential decay and a sine wave, but both figures show that differencing has removed the autocorrelation from the data. In both figures the lags are disturbed randomly, without patterns: the random variables are now uncorrelated with the previous period, resembling a white noise model. Additionally, no lag reaches outside the 95% limits, a good sign that the data are now stationary, and we can move on to the next step.
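The partial autocorrelations plotted in a PACF correlogram can be derived from the autocorrelations via the Durbin-Levinson recursion. The sketch below is an illustration (not STATA's routine) of why an AR(1) series produces a single PACF spike at lag 1 that then cuts off to zero:

```python
# Sketch: partial autocorrelations phi_kk via the Durbin-Levinson recursion.
def pacf(r):
    """r: sample autocorrelations r_1..r_K; returns phi_kk for k = 1..K."""
    phi = {}
    pac = []
    for k in range(1, len(r) + 1):
        if k == 1:
            phi[(1, 1)] = r[0]
        else:
            num = r[k - 1] - sum(phi[(k - 1, j)] * r[k - 1 - j] for j in range(1, k))
            den = 1 - sum(phi[(k - 1, j)] * r[j - 1] for j in range(1, k))
            phi[(k, k)] = num / den
            for j in range(1, k):     # update the intermediate coefficients
                phi[(k, j)] = phi[(k - 1, j)] - phi[(k, k)] * phi[(k - 1, k - j)]
        pac.append(phi[(k, k)])
    return pac

# For an AR(1) process the theoretical ACF is r_k = phi**k, so the PACF
# spikes at lag 1 (here 0.79, matching the lag-1 value discussed below)
# and is essentially zero afterwards:
print(pacf([0.79 ** k for k in range(1, 5)]))
```

This cut-off-after-lag-1 pattern is exactly the PACF signature used in the next section to identify an AR(1) component.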
1.3 Autoregressive moving average model (ARMA)
Referring to the model provided by Box and Jenkins (1976), there are three types of parameters: AR(p), the order of the autoregressive part; I(d), the degree of integrated differencing; and MA(q), the order of the moving average part. The model is abbreviated ARIMA(p,d,q). For example, ARIMA(0,1,0) is a random walk model: no MA or AR terms are involved, and the series has been differenced once. For seasonal models, seasonal terms are appended after the non-seasonal part, for example ARIMA(p,d,q)(P,D,Q)s.
The combined form of the non-seasonal model can be written as:

(1 - phi1*B - ... - phip*B^p)(1 - B)^d Yt = c + (1 + theta1*B + ... + thetaq*B^q) et

where B is the backshift operator, c is a constant and et is the error term.
Our unemployment rate data from the European Central Bank are "seasonally adjusted", meaning the data have been modified to remove the effects of seasonal variation, so we can ignore the seasonal part of the model. The ARIMA model will therefore take the form ARIMA(p, 1, q), with the AR and MA orders identified from the ACF and PACF plots.
From the plots of the differenced data, we can provisionally identify the ARIMA model as (1,1,0). In the ACF plot of D.urate the exponential decay is on the positive side, with lag 1 (0.79) well above zero. Additionally, in the PACF plot of D.urate, lag 1 spikes up significantly and then cuts off to zero. A few lags fall outside the range of +0.2 to -0.2, which is acceptable since there is an error component in the data. Lag 1, which is close to the significance critical value, dominates the autocorrelation and partial autocorrelation results for the ARIMA model.
2.1 Estimating the parameters
After identifying the parameters as non-seasonal, the next step is to estimate whether the parameters are suitable for the potential model. Forecasting, 3rd Edition (S. Makridakis) suggests that "Computer programs for fitting ARIMA models will automatically find appropriate initial estimates of the parameters and then successively refine them until the optimum values of parameters are found". Our regression tool, STATA 11, can run the regressions to find these optimal values. The p-value is always considered, since it represents the significance level of the test. The p-value is computed from the two-sided z-value in the normal probability table; if the value is small (<0.05), the parameter is highly significant, which tells us the chosen method is correct.
Figure 9 ARIMA model regressions (STATA 11)
According to Figure 9, STATA 11 computes the ARIMA regression for our forecasting model ARIMA(1,1,0). We can observe that the probability of a standard normal variable being larger than 12.05 or less than -12.05 is essentially zero, which means the AR(1) parameter is highly significant. The constant term has a relatively high p-value (0.855), but this can be ignored, since it does not affect the forecasting result.
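The mechanics of fitting ARIMA(1,1,0) can be sketched as an ordinary least squares regression on the first-differenced series, followed by iterating forecasts back up to the level. This is a simplified illustration of what STATA does by maximum likelihood, run on a synthetic series rather than the unemployment data:

```python
# Sketch: fit ARIMA(1,1,0), i.e. dY_t = c + phi*dY_{t-1} + e_t, by OLS
# on the differenced series, then iterate multi-step forecasts.
def fit_arima110(y, steps):
    d = [y[t] - y[t - 1] for t in range(1, len(y))]   # first difference
    x, z = d[:-1], d[1:]                              # lagged pairs
    n = len(x)
    xbar, zbar = sum(x) / n, sum(z) / n
    phi = sum((a - xbar) * (b - zbar) for a, b in zip(x, z)) \
        / sum((a - xbar) ** 2 for a in x)
    c = zbar - phi * xbar
    level, diff = y[-1], d[-1]
    forecasts = []
    for _ in range(steps):
        diff = c + phi * diff        # forecast the next difference
        level = level + diff         # integrate back to the level
        forecasts.append(level)
    return phi, c, forecasts

# Synthetic series whose differences follow d_{t+1} = 0.1 + 0.5*d_t exactly:
diffs = [1.0]
for _ in range(19):
    diffs.append(0.1 + 0.5 * diffs[-1])
y = [5.0]
for dd in diffs:
    y.append(y[-1] + dd)
phi, c, fc = fit_arima110(y, 3)      # recovers phi ~ 0.5, c ~ 0.1
print(phi, c, fc)
```

The same iteration, applied to the fitted unemployment model, is what produces the three-quarters-ahead values reported in the forecasting section.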
2.2 Diagnostic Check
Diagnostic checking is an important step in ARIMA forecasting, since residuals (errors) are spread across the time series data. The residuals are assumed to be uncorrelated, simply white noise. Therefore, examining the ACF plot, the PACF plot and a white noise test plays an important role before producing the forecast.
Figure 10 ACF Plot of Residual Figure 11 PACF Plot of Residual
From the graphs, the residuals do not appear fully standardised: approximately 3-4 AC and PAC lags stick out, owing to the error component in the data. Performing a portmanteau white noise test is a better way to judge the plots than inspection by eye.
Figure 12 Portmanteau white noise test result
The portmanteau test returns a relatively high Q value (159.5157); the larger Q is, the stronger the evidence against the null hypothesis of white noise. The p-value, equal to zero, confirms the rejection: the evidence shows that the residuals are not pure white noise. We can now move to the final step of ARIMA forecasting.
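The Q statistic behind the portmanteau test can be sketched in its Ljung-Box form. This is an illustration of the formula only; STATA's exact variant and the real residual series are assumed rather than reproduced, and the example inputs below are synthetic:

```python
import random

# Sketch: Ljung-Box portmanteau statistic Q = n(n+2) * sum_{k=1..h} r_k^2/(n-k).
# Under the white-noise null, Q is approximately chi-squared with h degrees
# of freedom, so a large Q rejects white noise.
def ljung_box_q(resid, h):
    n = len(resid)
    mean = sum(resid) / n
    denom = sum((e - mean) ** 2 for e in resid)
    q = 0.0
    for k in range(1, h + 1):
        r_k = sum((resid[t] - mean) * (resid[t + k] - mean)
                  for t in range(n - k)) / denom
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q

# A strongly patterned "residual" series yields a huge Q; pure noise a small one:
random.seed(3)
white = [random.gauss(0, 1) for _ in range(100)]
patterned = [(-1) ** t for t in range(100)]
print(ljung_box_q(white, 5), ljung_box_q(patterned, 5))
```

Comparing Q with the chi-squared critical value (or reading its p-value) gives exactly the accept/reject decision reported in Figure 12.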
Following the two steps above, our chosen model is ARIMA(1,1,0), whose prediction function can be written as:

Yhat(t+1) = Y(t) + c + phi1 * (Y(t) - Y(t-1))

Here Yhat(t+1) is the first predicted value, the unemployment rate for the second quarter of 2010; t indexes time, and t-1 represents the lag.
STATA 11 computes the ARIMA(1,1,0) forecast as follows:
Figure 12: ARIMA(1,1,0) model forecasting result from STATA 11
Figure 13: UK unemployment 2010 forecast plot
Figure 12 shows the forecasts for the next three quarters of 2010, along with a 95% prediction interval for each forecast. These values would be hard to calculate by hand, and as we forecast further ahead the prediction intervals widen accordingly. As the predicted values show, the rate in the following three quarters drops to 7.53, 7.47 and 7.42, indicating a tendency for the unemployment rate to decrease.
Limitations of ARIMA model forecasting
Several factors affect the accuracy of ARIMA forecasting. First, there is uncertainty in the parameters, and data may have been omitted during collection. Also, referring to Figure 8 (the PACF plot after differencing), several lags still lie outside the +0.2 to -0.2 autocorrelation band, showing that the data contain uncertainties and errors that reduce forecasting efficiency. Furthermore, ARIMA is a "backward-stepping" forecasting model: predictions are driven by past data patterns without any economic theory or structure, and insufficient data make prediction at turning points difficult. Additionally, the forecast is optimal only if the model's assumptions hold; in reality, the world changes constantly, and the underlying data may shift within our forecasting period.
Rather than stopping at this forecast, the best way to assess the model's accuracy is to compare it with another forecasting model on the basis of mean squared error (MSE). The mean squared error is a risk function measuring the average of the squared errors, defined as:

MSE = (1/n) * sum over t of (Yt - Ft)^2

where Yt is the actual value, Ft the forecast and n the number of forecasts compared.
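As a sketch, the comparison reduces to a few lines; the numbers below are made up for illustration and are not actual outturns or model output:

```python
# Sketch: mean squared error MSE = (1/n) * sum (Y_t - F_t)^2 for comparing
# two forecasting methods on the same hold-out observations.
def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

actual   = [7.6, 7.5, 7.5]      # hypothetical outturns
arima_fc = [7.53, 7.47, 7.42]   # ARIMA point forecasts
flat_fc  = [7.7, 7.7, 7.7]      # a flat benchmark forecast
print(mse(actual, arima_fc), mse(actual, flat_fc))
```

Whichever method yields the smaller MSE over the same observations is judged the more accurate forecaster.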
Single exponential smoothing method for forecasting
Since the result predicted by the ARIMA model may not be accurate, the single exponential smoothing method is chosen for comparison.
Single exponential smoothing is a popular method for short-run time series forecasting. Referring to Forecasting, 3rd Edition, the method is based on a weighted average of the past data, with exponentially decreasing weights as the observations get older, which is what identifies it as an exponential smoothing procedure. Put simply, older observations are given less weight in the forecast than recent observations. The forecasting function is:

S(t+1) = S(t) + alpha * (y(t) - S(t)) = alpha * y(t) + (1 - alpha) * S(t)
In this function, S(t+1) is the forecast based on weighting the most recent observations; alpha, a constant between 0 and 1, governs the adjustment for the error in the last forecast; and alpha * y(t) is the weighted term that gives the most weight to the most recent observation.
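A minimal implementation of this recursion is sketched below; in practice alpha would be chosen to minimise the in-sample MSE, and the input series here is illustrative:

```python
# Sketch of single exponential smoothing:
# S_{t+1} = S_t + alpha*(y_t - S_t) = alpha*y_t + (1 - alpha)*S_t
def ses_forecast(y, alpha):
    s = y[0]                          # initialise at the first observation
    for obs in y[1:]:
        s = s + alpha * (obs - s)     # adjust by a fraction of the last error
    return s                          # one-step-ahead forecast

print(ses_forecast([10.0, 12.0, 11.0], 0.5))   # → 11.0
```

Unrolling the recursion shows that an observation k periods old receives weight alpha * (1 - alpha)^k, matching the exponentially decreasing weights described above.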