
### Chapter 3 - Research Design

This chapter describes how the proposed VaR models are applied to predicting equity market risk. The thesis first outlines the collected empirical data. We next verify the assumptions usually invoked in VaR models, and examine the observed data to identify whether its characteristics are in line with those assumptions. Various VaR models are then discussed, beginning with the non-parametric approach (the historical simulation model), followed by the parametric approaches under different distributional assumptions of returns, deliberately combined with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models.

### 3.1. Data

The data used in the study are financial time series reflecting the daily historical price changes of two equity indexes: the FTSE 100 of the UK market and the S&P 500 of the US market. Instead of arithmetic returns, the paper employs daily log-returns. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each index. More precisely, to implement the empirical test, this period is divided into two sub-periods: the first series of empirical data, used for parameter estimation, spans from 05/06/2002 to 31/07/2007.

The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that this latter stage coincides exactly with the current global financial crisis, which began in August 2007, peaked dramatically in the closing months of 2008 and subsided significantly by mid-2009. The study can therefore deliberately examine the accuracy of the VaR models within this volatile time.
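As a sketch of the return construction described above, daily log-returns r_t = ln(P_t / P_{t-1}) can be computed from a closing-price series as follows (the prices shown are hypothetical, not the actual index data):

```python
import numpy as np

def log_returns(prices):
    """Daily log-returns: r_t = ln(P_t / P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

# Hypothetical closing prices; a series of n prices yields n - 1 returns,
# which is why 1782 FTSE 100 prices give 1781 return observations.
prices = [100.0, 101.0, 99.5, 100.2]
returns = log_returns(prices)
```

For small price changes, log-returns are close to arithmetic returns but are additive over time, which simplifies multi-day aggregation.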

### 3.1.1. FTSE 100 index

The FTSE 100 Index, launched on 3rd January 1984, is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator.

In the dissertation, the full data used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009.

### 3.1.2. S&P 500 index

The S&P 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks in the S&P 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the S&P 500 is the most widely followed index of large-cap American stocks. The S&P 500 refers not only to the index but also to the 500 companies whose common stock is included in it, and it is consequently considered a bellwether for the US economy.

Similar to the FTSE 100, the data for the S&P 500 is also observed during the same period with 1775 observations (1775 working days).

### 3.2. Data Analysis

For the VaR models, one of the most important aspects is assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the collected empirical data characteristics.

### 3.2.1. Assumptions

### 3.2.1.1. Normality assumption

### Normal distribution

As mentioned in Chapter 2, most VaR models assume that the return distribution is standard normal, with a mean of 0 and a standard deviation of 1 (see Figure 3.1). Nonetheless, Chapter 2 also shows that actual returns in most previous empirical investigations do not completely follow this distribution.

Figure 3.1: Standard Normal Distribution

### Skewness

Skewness is a measure of the asymmetry of the distribution of a financial time series around its mean. Data is normally assumed to be symmetrically distributed, with a skewness of 0. A dataset with either a positive or negative skew deviates from the normality assumption (see Figure 3.2). This can make parametric approaches that assume normally distributed returns, such as RiskMetrics and the symmetric normal-GARCH(1,1) model, less effective when asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns.

Figure 3.2: Plot of a positive or negative skew

### Kurtosis

Kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high kurtosis means that more of the data's variance comes from extreme deviations; in other words, asset returns contain more extreme values than the normal distribution would imply. According to Lee and Lee (2000), positive excess kurtosis is called leptokurtic and negative excess kurtosis platykurtic. Normally distributed data has a kurtosis of 3 (an excess kurtosis of 0).

Figure 3.3: General forms of Kurtosis

### Jarque-Bera Statistic

In statistics, the Jarque-Bera (JB) statistic tests whether a series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample skewness and kurtosis. The test statistic JB is defined as:

$$JB = \frac{n}{6}\left(S^2 + \frac{(K-3)^2}{4}\right)$$

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom.
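A minimal numpy implementation of this statistic (a sketch for illustration, not the software actually used in the thesis) could read:

```python
import numpy as np

def jarque_bera(x):
    """JB = n * (S^2/6 + (K - 3)^2/24); asymptotically chi-square(2)
    under the null hypothesis of normality."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    s2 = (d**2).mean()
    S = (d**3).mean() / s2**1.5   # sample skewness
    K = (d**4).mean() / s2**2     # sample kurtosis (equals 3 for a normal)
    return n * (S**2 / 6.0 + (K - 3.0)**2 / 24.0)
```

For the alternating series [-1, 1, -1, 1, -1, 1] the skewness is 0 and the kurtosis is 1, giving JB = 6 · (2²/24) = 1; large JB values reject normality.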

### Augmented Dickey–Fuller Statistic

The Augmented Dickey–Fuller (ADF) test is a test for a unit root in a time-series sample. It is an augmented version of the Dickey–Fuller test for a larger and more complicated set of time-series models. The ADF statistic is a negative number; the more negative it is, the stronger the rejection of the hypothesis that a unit root is present, at some level of confidence. ADF critical values: (1%) –3.4334, (5%) –2.8627, (10%) –2.5674.
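In practice one would call a library routine such as `adfuller` in Python's statsmodels; purely to illustrate the underlying idea, the basic (non-augmented) Dickey-Fuller t-statistic can be sketched with ordinary least squares:

```python
import numpy as np

def dickey_fuller_t(x):
    """t-statistic on x_{t-1} in the regression dx_t = c + gamma * x_{t-1} + e_t.
    Strongly negative values reject the unit-root (non-stationarity) null.
    The augmented version adds lagged differences of x as extra regressors."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    X = np.column_stack([np.ones(len(dx)), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, dx, rcond=None)
    resid = dx - X @ beta
    sigma2 = resid @ resid / (len(dx) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se
```

A stationary, mean-reverting series yields a strongly negative statistic, while a random walk yields one close to zero; the statistic is then compared with the Dickey-Fuller critical values quoted above.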

### 3.2.1.2. Homoscedasticity assumption

Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values for an independent variable.

Figure 3.4: Plot of Homoscedasticity

Unfortunately, Chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns), and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods of exceptionally high volatility interspersed with periods of unusually low volatility, namely volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) common to a broad set of financial assets: high-volatility events tend to cluster in time.

### 3.2.1.3. Stationarity assumption

According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study that remain constant over time; otherwise it is meaningless to try to identify them.

One of the hypotheses relating to the invariance over time of the statistical properties of the return process is stationarity. This hypothesis assumes that for any set of time instants t1, ..., tk and any time interval τ, the joint distribution of the returns (r(t1), ..., r(tk)) is the same as the joint distribution of (r(t1+τ), ..., r(tk+τ)). The Augmented Dickey-Fuller test will accordingly be used to examine the stationarity of the statistical properties of the returns.

### 3.2.1.4. Serial independence assumption

There are a large number of tests of the randomness of sample data, and autocorrelation plots are one common method. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged by one or more time periods.

The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series).

In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the "overall" randomness based on a number of lags.

The Ljung-Box test statistic can be defined as:

$$Q = n(n+2)\sum_{j=1}^{h}\frac{\hat{\rho}_j^2}{n-j}$$

where n is the sample size, ρ̂j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q exceeds the 1−α quantile (percent point function) of the Chi-square distribution with h degrees of freedom.
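A direct implementation of the statistic (again a sketch; econometric packages provide it ready-made, e.g. `acorr_ljungbox` in statsmodels):

```python
import numpy as np

def ljung_box_q(x, h):
    """Q = n(n+2) * sum_{j=1}^{h} rho_j^2 / (n - j); compared against the
    1 - alpha quantile of a chi-square distribution with h degrees of freedom."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    denom = d @ d
    q = 0.0
    for j in range(1, h + 1):
        rho = (d[j:] @ d[:-j]) / denom   # sample autocorrelation at lag j
        q += rho * rho / (n - j)
    return n * (n + 2) * q
```

A strongly alternating series has a lag-1 autocorrelation near -1 and hence a very large Q, while white noise gives small values close to the chi-square mean h.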

### 3.2.2. Data Characteristics

Table 3.1 gives the descriptive statistics for the FTSE 100 and the S&P 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a, 3.5b, 3.6a and 3.6b present the plots of returns and the price index over time. Besides, Figures 3.7a, 3.7b, 3.8a and 3.8b show the frequency distributions of the FTSE 100 and the S&P 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics on the returns of the FTSE 100 Index and S&P 500 index between 05/06/2002 and 22/6/2009.

| Diagnostics | S&P 500 | FTSE 100 |
| --- | --- | --- |
| Number of observations | 1774 | 1781 |
| Largest return | 10.96% | 9.38% |
| Smallest return | -9.47% | -9.26% |
| Mean return | -0.0001 | -0.0001 |
| Variance | 0.0002 | 0.0002 |
| Standard deviation | 0.0144 | 0.0141 |
| Skewness | -0.1267 | -0.0978 |
| Excess kurtosis | 9.2431 | 7.0322 |
| Jarque-Bera | 694.485*** | 2298.153*** |
| Augmented Dickey-Fuller (ADF) ² | -37.6418 | -45.5849 |
| Q(12) | 20.0983* (autocorr. 0.04) | 93.3161*** (autocorr. 0.03) |
| Q²(12) | 1348.2*** (autocorr. 0.28) | 1536.6*** (autocorr. 0.25) |
| Ratio of SD/mean | 144 | 141 |

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively.

2. 95% critical value for the augmented Dickey-Fuller statistic = -3.4158

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009

Figure 3.5b: The S&P 500 daily returns from 05/06/2002 to 22/06/2009

Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009

Figure 3.6b: The S&P 500 daily closing prices from 05/06/2002 to 22/06/2009

Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Figure 3.7b: Histogram showing the S&P 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Figure 3.8a: Diagram showing the FTSE 100's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Figure 3.8b: Diagram showing the S&P 500's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the S&P 500 average daily returns are approximately 0 percent, or at least very small compared with the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the S&P 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, a standard deviation that is large relative to the mean supports the view that daily changes are dominated by randomness, and that the small mean can be disregarded in risk measure estimates.

Moreover, the paper employs five statistics often used in data analysis (skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller and the Ljung-Box test) to examine the full empirical period from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b show the histograms of the FTSE 100 and the S&P 500 daily return data with the normal distribution imposed. The distributions of both indexes have longer, fatter tails and higher probabilities of extreme events than the normal distribution, particularly on the negative side (negative skewness implying that the distribution has a long left tail).

Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. The distributions are also more peaked around their means than the normal distribution: the value of kurtosis is very high (about 10 and 12 for the FTSE 100 and the S&P 500, respectively, compared with 3 for the normal distribution; see also Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normality assumption is the kurtosis, visible in the middle bars of the histogram rising above the normal curve. Moreover, outliers clearly remain, indicating that excess kurtosis is still present.

The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes, so the samples exhibit the typical characteristics of financial returns: volatility clustering and leptokurtosis. Besides, the daily returns of both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts. In particular, returns were very volatile at the beginning of the examined period, from June 2002 to mid-June 2003. After remaining stable for about four years, the returns of these two well-known stock indexes became highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009.

Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than that predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility).

In terms of stationarity, the Augmented Dickey-Fuller test is adopted as the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary); the alternative hypothesis is that the time series is stationary, so rejecting the null means the series is stationary. The thesis employs the ADF unit root test including an intercept and a trend term on returns. The resulting test statistics for the FTSE 100 and the S&P 500 are -45.5849 and -37.6418, respectively. Such values are far below the 95% critical value of the augmented Dickey-Fuller statistic (-3.4158). We can therefore reject the unit root null hypothesis and conclude that the daily return series are robustly stationary.

Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively. The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the S&P 500 daily return series (first-moment dependencies). In other words, the return series exhibit linear dependence.

Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.

Figure 3.9b: Autocorrelations of the S&P 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.

Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the S&P 500 daily returns display no systematic pattern and have very little autocorrelation. According to Christoffersen (2003), in this situation we can write:

Corr(Rt+1,Rt+1-λ) ≈ 0, for λ = 1,2,3…, 100

Therefore, returns are almost impossible to predict from their own past.

One note is that, since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelation in squared returns (variances) for the FTSE 100 and the S&P 500 data; more importantly, the variance displays positive correlation with its own past, especially at short lags.

Corr(R2t+1,R2t+1-λ) > 0, for λ = 1,2,3…, 100

Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns

Figure 3.10b: Autocorrelations of the S&P 500 squared daily returns

### 3.3. Calculation of Value At Risk

This section focuses on how to calculate VaR figures for both single return indexes from the proposed models: the Historical Simulation, RiskMetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except for the historical simulation model, which makes no assumption about the shape of the distribution of asset returns, the others have commonly been studied under the assumption that returns are normally distributed. Based on the data examination in the previous section, this assumption is rejected, because observed extreme outcomes of both single index returns occur more often and are larger than the normal distribution predicts.

Also, volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or are unrealistic. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical given the volatility clustering of the empirical data. Similarly, although RiskMetrics avoids relying solely on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic in view of the results of examining the collected data.

The normal-GARCH(1,1) model and the Student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering that occur in the observed financial time series data, but their normal distributional assumption of returns is likewise implausible compared with the empirical data. Despite all this, the thesis still uses the four models under the normal distributional assumption of returns in order to compare and evaluate their estimated results against those predicted under the Student-t distributional assumption of returns.

Besides, since the empirical data exhibits fatter tails than the normal distribution, the thesis intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter we deliberately calculate VaR by separating these three procedures into three different sections; the final results will be discussed at length in Chapter 4.

### 3.3.1. Components of VaR measures

Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values of the left-tail probability will be considered, ranging from the very conservative level of 1 percent, through the intermediate 2.5 percent, to the less cautious 5 percent.

The various VaR models will be estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the S&P 500, respectively) for parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. One interesting point is that, since few previous empirical studies have examined the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.

### 3.3.2. Calculation of VaR

### 3.3.2.1. Non-parametric approach - Historical Simulation

As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was some time in the past, and VaR is therefore computed from the historical returns distribution. Consequently, we treat this non-parametric approach in a separate section.

Chapter 2 showed that calculating VaR with the historical simulation model is not mathematically complex, since the measure only requires a reasonable span of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies report that the model's predictions are relatively reliable once the window of data used for simulating daily VaRs is no shorter than 1000 observed days.

In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the S&P 500, respectively, spanning from 05/06/2002 through 31/07/2007. We have selected this rather than a larger window because adding more historical data means adding older historical data that could be irrelevant to the future development of the return indexes.

After sorting the past returns in ascending order and attributing them to equally spaced classes, the predicted VaRs are determined as the log-returns lying at the target percentiles, which in this thesis are the 1%, 2.5% and 5% lower-tail percentiles of the return distribution. The result is a frequency distribution of returns, displayed as a histogram in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% of returns from the remaining (99%, 97.5% and 95%) returns.

For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset's value tomorrow (on 1st August 2007). The S&P 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, at -2.74%, -2.03% and -1.53% for the 99%, 97.5% and 95% confidence levels, respectively.
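The percentile selection described above can be sketched as follows. The rank convention shown here (rounding α·n to the nearest rank) reproduces the 13th, 33rd and 65th ranks quoted for 1304 returns, but other interpolation conventions exist and the thesis does not state which one was used:

```python
import numpy as np

def hs_var(returns, alpha):
    """Historical-simulation VaR: the alpha-percentile of the sorted returns.
    Rank convention: round(alpha * n), so alpha = 1% with n = 1304 picks the
    13th-lowest return (an assumed convention for this illustration)."""
    r = np.sort(np.asarray(returns, dtype=float))   # ascending: worst first
    k = max(int(round(alpha * len(r))), 1)
    return r[k - 1]
```

For example, with 100 returns the 5% VaR is simply the 5th-lowest observation.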

Figure 3.11a: Histogram of daily returns of FTSE 100 between 05/06/2002 and 31/07/2007

Figure 3.11b: Histogram of daily returns of S&P 500 between 05/06/2002 and 31/07/2007

Following the VaRs predicted for the first day of the forecast period, we continue calculating VaRs for the whole period from 01/08/2007 to 22/06/2009. Whether the proposed non-parametric model performs accurately in this turbulent period will be discussed at length in Chapter 4.

### 3.3.2.2. Parametric approaches under the normal distributional assumption of returns

This section presents how to calculate daily VaRs using the parametric approaches, including RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in Chapter 4.

### 3.3.2.2.1. The RiskMetrics

Compared with the historical simulation model, RiskMetrics, as discussed in Chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes over the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). The other inputs, the squared log-return and the variance of the previous day, are easily calculated.

After calculating the daily variance, we continuously measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under different confidence levels of 99%, 97.5% and 95% based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV.
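The two steps above can be sketched as follows. This is an illustrative implementation, not the spreadsheet actually used: the critical z-values are hard-coded standard normal quantiles instead of calls to NORMSINV, and initialising the recursion at the sample variance is an assumption of this sketch:

```python
import numpy as np

Z = {0.01: -2.3263, 0.025: -1.9600, 0.05: -1.6449}  # standard normal quantiles

def riskmetrics_var(returns, alpha, lam=0.94):
    """EWMA variance: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    with the RiskMetrics daily decay factor lam = 0.94; the one-day VaR is
    then z_alpha * sigma for the next day."""
    r = np.asarray(returns, dtype=float)
    sigma2 = r.var()                       # starting value (illustrative choice)
    for rt in r:
        sigma2 = lam * sigma2 + (1.0 - lam) * rt * rt
    return Z[alpha] * np.sqrt(sigma2)
```

For a long series of constant 1% returns, the EWMA volatility converges to 1% and the 95% one-day VaR approaches -1.6449%.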

### 3.3.2.2.2. The Normal-GARCH(1,1) model

For GARCH models, Chapter 2 confirms that the most important step is to estimate the model parameters ω, α and β. These parameters have to be estimated numerically, using the method of maximum likelihood estimation (MLE). In practice, rather than handling the mathematical calculations by hand, many previous studies use professional econometric software for the MLE. Accordingly, the normal-GARCH(1,1) model is executed using a well-known econometric tool, STATA, to estimate the model parameters (see Table 3.2 below).

Table 3.2. The parameters statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the S&P 500

Normal-GARCH(1,1)*

| Parameters | FTSE 100 | S&P 500 |
| --- | --- | --- |
| α | 0.0955952 | 0.0555244 |
| β | 0.8907231 | 0.9289999 |
| ω | 0.0000012 | 0.0000011 |
| α + β | 0.9863183 | 0.9845243 |
| Number of observations | 1304 | 1297 |
| Log likelihood | 4401.63 | 4386.964 |

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution with significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of ‘old’ news on volatility is significant. The magnitude of β is especially high (around 0.89 to 0.93), indicating a long memory in the variance.

The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the S&P 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood of this model was 4401.63 for the FTSE 100 and 4386.964 for the S&P 500. The log-likelihood ratios rejected the hypothesis of normality very strongly.

After calculating the model parameters, we measure the conditional variance (volatility) for the parameter estimation period from 05/06/2002 to 31/07/2007 based on the conditional variance formula (2.11), where the inputs are the squared log-return and the conditional variance of the previous day. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution at the significance levels of 1%, 2.5% and 5% is simply computed using the Excel function NORMSINV.
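Given MLE parameters like those in Table 3.2, the variance recursion and VaR step can be sketched as follows. Initialising the recursion at the unconditional (long-run) variance is an assumption of this illustration; the thesis does not state its starting value:

```python
import numpy as np

def garch_var(returns, omega, alpha_g, beta, z):
    """GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1};
    one-day VaR = z * sigma, with z the chosen normal quantile."""
    r = np.asarray(returns, dtype=float)
    sigma2 = omega / (1.0 - alpha_g - beta)   # unconditional (long-run) variance
    for rt in r:
        sigma2 = omega + alpha_g * rt * rt + beta * sigma2
    return z * np.sqrt(sigma2)

# FTSE 100 parameters from Table 3.2: the implied long-run daily volatility
# sqrt(omega / (1 - alpha - beta)) is about 0.94%, as noted in the text.
lr_vol = np.sqrt(0.0000012 / (1.0 - 0.0955952 - 0.8907231))
```

When recent squared returns exceed the long-run variance, the recursion pushes the forecast volatility, and hence the VaR, above its long-run level.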

### 3.3.2.2.3. The Student-t GARCH(1,1) model

Unlike the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies have suggested that the symmetric GARCH(1,1) model with Student-t distributed volatility is more accurate than its normal counterpart when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use the model under the normal distributional assumption of returns. The first step is to estimate the model parameters using maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3. The parameters statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the S&P 500

Student-t GARCH(1,1)*

| Parameters | FTSE 100 | S&P 500 |
| --- | --- | --- |
| α | 0.0926120 | 0.0569293 |
| β | 0.8946485 | 0.9354794 |
| ω | 0.0000011 | 0.0000006 |
| α + β | 0.9872605 | 0.9924087 |
| Number of observations | 1304 | 1297 |
| Log likelihood | 4406.50 | 4399.24 |

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 reveals the same qualitative characteristics of the Student-t GARCH(1,1) model parameters as the normal-GARCH(1,1) approach. Specifically, the estimates of α show that strong ARCH effects were evidently present in the UK and US financial markets during the parameter estimation period from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) noted, there was also a considerable impact of ‘old’ news on volatility, as well as a long memory in the variance. We then follow the same steps as in calculating VaRs with the normal-GARCH(1,1) model.

### 3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique

Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that returns are normally distributed. Regardless of their results and performance, this assumption is clearly impractical, given that the collected empirical data exhibits fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution so as to account for fatter tails. Again, the question of whether the proposed models performed well within the recent crisis period will be assessed at length in Chapter 4.

### 3.3.2.3.1. The CFE-modified RiskMetrics

As with the normal RiskMetrics calculation, we first work out the daily RiskMetrics variance for both indexes and subsequently measure the VaRs for the forecasting period under the confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6). However, at this stage we replace the critical z-value of the standard normal distribution with the z-value modified by the CFE (see formula (2.12)), where S is the skewness and K − 3 the excess kurtosis of the empirical distribution over the parameter estimation period. From formulas (2.6) and (2.12), we can see that both VaRs, the normal VaR and the CFE-modified normal VaR, are proportional to volatility (standard deviation), and the only difference between the two lies in the weighting of the standard deviation.
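The standard fourth-moment Cornish-Fisher adjustment can be sketched as follows. This is the textbook form of the expansion; formula (2.12) in the thesis is not reproduced here, so this should be read as an assumed, illustrative version. With zero skewness and zero excess kurtosis it reduces to the ordinary normal quantile:

```python
def cf_z(z, skew, ex_kurt):
    """Cornish-Fisher modified quantile: adjusts the normal z-value for
    skewness S and excess kurtosis (K - 3) of the empirical distribution.
    Textbook fourth-moment expansion, assumed to match formula (2.12)."""
    return (z
            + (z**2 - 1.0) * skew / 6.0
            + (z**3 - 3.0 * z) * ex_kurt / 24.0
            - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)
```

With the negative skewness and large positive excess kurtosis reported in Table 3.1, the modified lower-tail quantile is more negative than the normal one, producing larger (more conservative) VaR estimates.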

### 3.3.2.3.2. The CFE-modified Normal-GARCH(1,1) model

Although, as maintained in the literature review, the GARCH family models in general, and the simple symmetric normal-GARCH(1,1) model in particular, can capture the fat tails and volatility clustering that often occur in financial time series data, the method still assumes that returns are normally distributed. In this sense, we again employ the Cornish-Fisher Expansion technique to accommodate non-normal skewness and excess kurtosis and thereby compensate for the fatter tails.

As with the estimation of the normal-GARCH(1,1) model, we first compute the daily normal-GARCH(1,1) variance for both indexes based on the estimated parameters (see Table 3.2) and then measure the VaRs for the forecasting period by replacing the critical z-value of the normal distribution with the CFE-modified z-value.
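As a sketch, the conditional variance recursion used in this step can be written as below; the parameter values are purely illustrative stand-ins for the Table 3.2 estimates, and the function name is ours:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = omega / (1.0 - alpha - beta)   # seed with the unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative parameters only (not the thesis's estimates).
sigma2 = garch11_variance(np.array([0.01, -0.03, 0.002, 0.015]),
                          omega=1e-6, alpha=0.08, beta=0.90)
```

The daily VaR then multiplies the CFE-modified z-value by `np.sqrt(sigma2[t])` instead of the raw normal quantile.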

### 3.3.2.3.3. The CFE-modified Student-t GARCH(1,1) model

As discussed in Chapter 2, the Student-t GARCH(1,1) model differs from the Normal-GARCH(1,1) model only in the distributional assumption of the errors (or residuals). Accordingly, the Student-t GARCH(1,1) approach assumes that the standardised residuals follow the Student-t distribution, whereas the Normal-GARCH(1,1) approach assumes the normal distribution. In terms of the returns distribution, however, this does not mean that the returns distribution must match the distributional assumption of the residuals.

Indeed, both models can be estimated under several distributional assumptions of returns (Normal, Student-t, Skewed Student-t). Since this section measures VaRs based on the normal distributional assumption of returns, this stage calculates the Student-t GARCH(1,1) model under the normal distribution modified by the CFE technique.

In implementing the model, it is clear that the only difference between the two simple symmetric GARCH(1,1) family models lies in the model parameters (see Table 3.3), resulting from the distinction in the volatility assumptions. The remaining steps of the measurement are the same as for the CFE-modified Normal-GARCH(1,1) model.

### 3.3.2.4. Parametric approaches under the student distributional assumption of returns

So far, the paper has discussed in detail how to estimate VaRs using the non-parametric (historical simulation) approach and the parametric approaches under the assumption that returns are normally distributed. The empirical data characteristics, nevertheless, confirmed that extreme outcomes occur more often, and are larger, than predicted by the normal distribution (fat tails). Also, the volatility of both stock index returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low (volatility clustering).

Therefore, beyond the results and performance of the models above, it is now essential to change the distributional assumption for returns. Specifically, this section estimates the VaRs using the parametric approaches under the assumption that returns follow the Student-t distribution. Again, the question of whether the proposed models under this assumption performed efficiently within the recent crisis period will be weighed up in Chapter 4.

### 3.3.2.4.1. The RiskMetrics

The calculation of the RiskMetrics model under the Student-t assumption is partly similar to the normal-RiskMetrics. First, we calculate the daily RiskMetrics variance for both indexes over the parameter-estimation period from 05/06/2002 to 31/07/2007 using the RiskMetrics variance formula (2.9). From the daily variance, we then measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at the 99%, 97.5% and 95% confidence levels based on the Student-t VaR formula (2.13). Note that the Student-t VaR at each probability level is obtained from two parameters: (i) the degrees of freedom, ν, which is calculated from the kurtosis of the parameter-estimation period, and (ii) the critical t-value at the p% probability level with ν degrees of freedom. For simplicity, the critical t-value is computed using the Excel function TINV.
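The two parameters can be recovered as sketched below. We assume the standard Student-t moment relation (excess kurtosis = 6/(ν − 4)) and use `scipy.stats.t` in place of Excel's TINV (note TINV is two-tailed, so TINV(2p, ν) matches the one-tailed quantile). The rescaling by sqrt((ν − 2)/ν), which gives the distribution variance σ², is one common form of the Student-t VaR and may differ in detail from the thesis's formula (2.13):

```python
import numpy as np
from scipy.stats import t as student_t

def dof_from_excess_kurtosis(ex_kurt):
    """Invert the Student-t moment relation ex_kurt = 6/(v - 4); requires ex_kurt > 0."""
    return 4.0 + 6.0 / ex_kurt

def student_t_var(p, sigma, ex_kurt):
    """Left-tail VaR (reported as a positive number) under a Student-t with variance sigma^2."""
    v = dof_from_excess_kurtosis(ex_kurt)
    # Rescale the raw t quantile so the distribution has standard deviation sigma.
    return -student_t.ppf(p, v) * np.sqrt((v - 2.0) / v) * sigma
```

For example, an excess kurtosis of 3 implies ν = 6, and the resulting 99% quantile multiplier exceeds the normal value of 2.326, reflecting the fatter tail.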

### 3.3.2.4.2. The Normal-GARCH(1,1) model

Compared with the Normal-GARCH(1,1) model under the normal distribution, the model based on the Student-t distribution of returns has the same model parameters (the same conditional variance). Nonetheless, as mentioned above, their predicted daily VaRs will differ because of the difference in the returns distributional assumption. Accordingly, the VaR in this case is based on the Student-t VaR. Specifically, from the estimated model parameters (see Table 3.2 above), we calculate the conditional variance for the parameter-estimation period and finally measure predicted daily VaRs for the forecasting period using the Student-t VaR formula (2.13).

### 3.3.2.4.3. The Student-t GARCH(1,1) model

Finally, the thesis employs the Student-t GARCH(1,1) model to predict the VaRs under the Student-t distributional assumption of returns. In fact, many previous studies found that the Student-t-returns-based t-GARCH(1,1) approach outperforms the others, since it not only captures the fat tails and the volatility clustering that usually occur in financial time series, but also avoids the normal distributional assumption of returns, which is unrealistic in forecasting market risk. In this sense, we follow the same steps as for measuring the Student-t GARCH(1,1) model under the normal distributional assumption, except that the VaR in this case is based on the Student-t VaR.

### 3.4. Backtesting of Value At Risk Models

In order to test the performance and validity of the proposed VaR models under the different assumptions, the paper uses Kupiec's and Christoffersen's backtesting techniques to determine whether each model's risk estimate is consistent with the assumptions on which the model is based, and to check whether the models give accurate VaR predictions. In other words, backtesting is employed to test whether actual losses are in line with projected losses. Accordingly, the VaR measure is violated (an exception occurs) when the absolute value of a negative return on a stock index exceeds the corresponding VaR measure.

In this sense, we first count the days on which the actual loss exceeds the predicted VaR for each index. The violation rates are then calculated for each model with respect to the target rates of violations: for VaR at 95%, the target rate of violations is 5%; for VaR at 97.5% and 99%, the target rates are 2.5% and 1%, respectively. Next, we compute the test statistics for both test cases: the unconditional coverage test (the Kupiec test) and the conditional coverage tests (the independence test and the conditional coverage test).

Finally, we compare the calculated statistics to the critical value of the Chi-squared distribution, with the null hypothesis that the actual number of violations is in line with the target number of violations (the VaR model is accepted). The following describes the particular steps in backtesting the VaR models.

First of all, as mentioned above, we count the days on which the actual loss exceeds the predicted VaR, assigning a value of "1" (and otherwise "0") at each confidence level, covering the recent financial crisis period from 01/08/2007 to 22/06/2009. In other words, if the actual negative return is larger in magnitude than the corresponding daily VaR estimate, it is recorded as a violation. The number of violations over the whole backtesting period is then summed and compared to the target violation number at each confidence level.
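The hit sequence just described reduces to a one-liner; a sketch (the function name is ours):

```python
import numpy as np

def hit_sequence(returns, var_forecasts):
    """1 on days where the realised loss exceeds the (positive) VaR forecast, else 0."""
    return (np.asarray(returns) < -np.asarray(var_forecasts)).astype(int)

# Example: only the first day's loss of -3% breaches a 2% VaR.
hits = hit_sequence([-0.03, 0.01, -0.01], [0.02, 0.02, 0.02])
```

Summing `hits` over the backtesting window gives the actual violation count to set against the target.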

The Kupiec test is a two-tailed test and belongs to the unconditional coverage tests; it tests whether the number of exceptions lies in an interval within which the null hypothesis is not rejected. Specifically, from formula (2.16), we estimate the test statistic LRuc, which is Chi-squared distributed with 1 degree of freedom. To do this, we count the number of non-violations (T0) and the number of violations (T1) at the probabilities p of 1%, 2.5% and 5%.
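Under the usual form of Kupiec's likelihood ratio (our reading of formula (2.16), which is not reproduced in this excerpt), the statistic compares the target rate p with the observed rate T1/(T0+T1):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_lruc(hits, p):
    """Kupiec unconditional coverage LR; chi-squared with 1 df under H0."""
    T1 = int(np.sum(hits))
    T0 = len(hits) - T1
    pi_hat = T1 / (T0 + T1)
    pi_hat = min(max(pi_hat, 1e-12), 1 - 1e-12)   # guard log(0) in degenerate samples
    lr = -2 * (T0 * np.log(1 - p) + T1 * np.log(p)) \
         + 2 * (T0 * np.log(1 - pi_hat) + T1 * np.log(pi_hat))
    return lr, chi2.sf(lr, df=1)                  # statistic and p-value
```

When the observed rate equals the target rate the statistic is zero; the further the two diverge, the larger LRuc and the more likely the model is rejected.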

However, as discussed in Chapter 2, the Kupiec test in particular, and the unconditional coverage tests in general, cannot effectively assess the VaR models when the violations are clustered, which empirically occurs in the collected data. Consequently, we additionally employ the conditional coverage test, which tests the tail losses for independence.

Formulas (2.17-2.20) give us a framework for implementing the test by counting the number of non-violations followed by a non-violation (T00), the number of non-violations followed by a violation (T01), the number of violations followed by a non-violation (T10) and the number of violations followed by a violation (T11). After computing the statistics for both test cases, we finally compare these results to the critical value of the Chi-squared distribution. Specifically, if the calculated statistic at a given probability exceeds the Chi-squared critical value, the VaR model is rejected, and vice versa. Note that to work out the critical value of the Chi-squared distribution, we simply use the CHIINV function in Excel. The results and evaluations of these models will be discussed at length in Chapter 4.
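A sketch of the independence statistic built from these four transition counts (our implementation of the standard Christoffersen test; the thesis's formulas (2.17-2.20) are not reproduced in this excerpt):

```python
import math
from scipy.stats import chi2

def christoffersen_lrind(hits):
    """Christoffersen independence LR on the hit sequence; chi-squared with 1 df under H0."""
    h = [int(x) for x in hits]
    pairs = list(zip(h[:-1], h[1:]))
    n00 = pairs.count((0, 0)); n01 = pairs.count((0, 1))
    n10 = pairs.count((1, 0)); n11 = pairs.count((1, 1))
    pi01 = n01 / max(n00 + n01, 1)                 # P(violation | no violation yesterday)
    pi11 = n11 / max(n10 + n11, 1)                 # P(violation | violation yesterday)
    pi = (n01 + n11) / max(n00 + n01 + n10 + n11, 1)
    def ll(k0, k1, q):                             # Bernoulli log-likelihood, guarding log(0)
        eps = 1e-12
        return k0 * math.log(max(1 - q, eps)) + k1 * math.log(max(q, eps))
    lr = -2 * ll(n00 + n10, n01 + n11, pi) + 2 * (ll(n00, n01, pi01) + ll(n10, n11, pi11))
    return lr, chi2.sf(lr, df=1)
```

The conditional coverage statistic is then LRcc = LRuc + LRind, compared to the Chi-squared distribution with 2 degrees of freedom; heavily clustered violations drive pi11 far above pi01 and produce a large LRind.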

### Chapter 4 - Results and Analysis

This chapter uses the backtesting techniques to evaluate the forecasting ability of the best-known VaR models (the Historical Simulation, the RiskMetrics, the symmetric Normal-GARCH(1,1) and the symmetric Student-t GARCH(1,1)) under several distributional assumptions of returns: Normal, Normal modified by the Cornish-Fisher Expansion technique, and Student-t.

### 4.1. Backtesting VaR approaches under the Normal Distributional Assumption of Returns

Tables 4.1a and 4.1b show the test statistics and backtesting results of the selected VaR models for the two single assets. Several trends are observable from the backtesting results. First, the performance of the models under the normal distributional assumption of returns depends somewhat on the confidence level: the better results tend to lie at the higher confidence levels and, conversely, the worse results at the lower confidence levels. Second, nearly all models are rejected at all three confidence levels under the unconditional test (the Kupiec test, LRuc) for both stock indexes.

In other words, the observed frequency of tail losses is not consistent with the frequency of tail losses predicted by the four VaR models (the average number of violations predicted is incorrect). This can easily be seen from the number of violations. Specifically, the number of actual violations over the whole backtesting period (T1) for both indexes is significantly higher than the target violation number (Ttarget) at all three confidence levels, indicating that the approaches underestimate the "true" VaR. Despite this, it is fairly clear that the models almost all pass the independence test (LRind) for both the FTSE 100 and the S&P 500, meaning that tomorrow's violation does not depend on whether there was a violation today.

Another signal is that the historical simulation, which relies solely on sample observations, performs the worst of all. Indeed, the model is rejected at almost all probability levels for both indexes. In contrast, although not absolutely powerful, the Normal-GARCH(1,1) and the Student-t GARCH(1,1) approaches are evidently much better than the other two approaches at the highest confidence level (99%) for both indexes. These tendencies can also be seen from figures 4.1a, 4.1b; 4.2a, 4.2b and 4.3a, 4.3b below, which plot the estimated VaR measures under the normal distributional assumption of returns at the different confidence levels together with actual returns for both the FTSE 100 and the S&P 500, covering the recent global credit crunch period.

Table 4.1a: Test Statistics and Backtesting Results of the Proposed VaR Models under the Returns Normal Distribution for the FTSE 100

* Note: Although the HS has nothing to do with the returns distributional assumption, we still include it in this table to compare its results with the others.

** Note: The conditional coverage test is calculated with 2 degrees of freedom at the 10% significance level (critical value = 4.605), since the test statistic is simply the sum of the individual test statistics for unconditional coverage and independence.

Table 4.1b: Test Statistics and Backtesting Results of the Proposed VaR Models under the Returns Normal Distribution for the S&P 500

* Note: Although the HS has nothing to do with the returns distributional assumption, we still include it in this table to compare its results with the others.

** Note: The conditional coverage test is calculated with 2 degrees of freedom at the 10% significance level (critical value = 4.605), since the test statistic is simply the sum of the individual test statistics for unconditional coverage and independence.

Figure 4.1a: Predicted Volatility of FTSE 100 at 99% Confidence Level under the Normal Distributional Assumption of Returns

Figure 4.1b: Predicted Volatility of S&P 500 at 99% Confidence Level under the Normal Distributional Assumption of Returns

Figure 4.2a: Predicted Volatility of FTSE 100 at 97.5% Confidence Level under the Normal Distributional Assumption of Returns

Figure 4.2b: Predicted Volatility of S&P 500 at 97.5% Confidence Level under the Normal Distributional Assumption of Returns

Figure 4.3a: Predicted Volatility of FTSE 100 at 95% Confidence Level under the Normal Distributional Assumption of Returns

Figure 4.3b: Predicted Volatility of S&P 500 at 95% Confidence Level under the Normal Distributional Assumption of Returns

From the figures above, it is obvious that the historical simulation is especially under-responsive to changes in conditional volatility compared with the other approaches at all three confidence levels, causing it to underestimate the "true" VaR. Put differently, the VaR estimate produced by the historical simulation method shows almost no response to the crash, especially in the final months of 2008 when the crisis peaked in both the UK and US stock markets. More specifically, during the volatile time, the VaR measure from the historical simulation stays at essentially the same level as in the months before the crisis peaked.

The main reason might be that the method assumes the historically simulated returns are independently and identically distributed through time. This assumption is unrealistic because the empirical data reveal that the volatility of asset returns tends to change over time, and that periods of high and low volatility tend to cluster together. In other words, the historical simulation method does not update the VaR number quickly when market volatility increases. Similarly, the RiskMetrics model does not perform impressively during the crisis period, especially at the lower confidence levels. Although the RiskMetrics system itself suggests estimating VaR at the 95% confidence level, at precisely that level the J.P. Morgan method is almost always rejected.

This result is believed to stem from its unrealistic normality assumption. Specifically, since there are far more outliers in the actual return distribution than would be expected under the normality assumption, the actual VaR tends to be much higher than the computed VaR, meaning the model does not provide an accurate figure. In contrast, even though the two simple GARCH(1,1) family models are also based on the normal distribution, they are observably able to capture, at least moderately, the fatter tails and volatility clustering in the empirical data, especially at the 99% confidence level.

A key reason might be that these models explicitly incorporate the impact of 'old' news on volatility. Furthermore, compared with the historical simulation approach, the GARCH-related methods are likely to handle the change in the distribution much better by attaching decaying weights to the historical observations, so that past returns become less and less important as time passes.

In spite of this, it is evident that under the normal distributional assumption of returns the parametric models above still underestimate the VaRs compared with the actual losses, because the normality assumption for the standardised residuals is not consistent with the behaviour of financial returns; they therefore do not perform vigorously during the recent financial turbulence. Danielsson (2008) sums up that the credit crunch, which began in the summer of 2007, shows that VaR-based risk models are of somewhat lower quality than was generally believed.

As noted in Chapter 3, we purposely employ the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution so as to account for the fatter tails. The following section analyses the results of the VaR approaches under the normal distributional assumption of returns modified by the CFE technique.

### 4.2. Backtesting VaR approaches under the Normal Distributional Assumption of Returns modified by the CFE technique

So far, we have backtested the selected normal-based parametric VaR models alongside the non-parametric model, which has nothing to do with the returns distributional assumptions. The results verified in section 4.1 reflect that the normal-based parametric VaR models are not fully effective during the recent volatile period. A key reason might be that the empirical distribution lies to the left of the normal distribution, indicating that very low returns (very large negative returns) occur much more often under the empirical distribution than under the normal distribution.

In brief, the left tail of the empirical distribution is fatter than that of the normal distribution (also see figures 3.7a, 3.7b and figures 3.8a, 3.8b). Accordingly, in order to estimate a parametric VaR precisely when the distribution is not normal, the thesis additionally employs the CFE technique to accommodate the non-normal skewness and excess kurtosis of the empirical distribution relative to the normal distribution.

Tables 4.2a and 4.2b below present the test statistics and backtesting results of the selected VaR models using the CFE technique for the two single assets. In terms of the number of violations, the results for the two indexes from the backtesting are moderately mixed. Specifically, at the 99% confidence level, whereas the three parametric models overestimate the "true" VaR for the FTSE 100 and are therefore rejected by the Kupiec test, they are wholly accepted at the same probability level for the S&P 500, even under all three tests, since the number of actual violations over the whole backtesting period (T1) is almost consistent with the target violation number (Ttarget).

The situation is exactly the opposite at the lower 97.5% confidence level. Whilst the three parametric models are accepted under the three backtesting methods for the FTSE 100, they are rejected by the Kupiec test, since the number of actual violations over the whole backtesting period (T1) is much higher than the target violation number (Ttarget), underestimating the actual losses. In spite of these complexities, there are clearly many more improvements than in the previous case. First, the performance of the selected models with the CFE modification is significantly better than under the purely normal distributional assumption of returns for both the FTSE 100 and the S&P 500.

It is fairly clear that the parametric VaR models applying the CFE technique produce considerably stronger results than without this technique, especially the two simple GARCH(1,1) family models. In fact, these CFE-modified symmetric GARCH(1,1) models are accepted under all three tests at the 2.5% probability level for the FTSE 100 and the 1% probability level for the S&P 500. Therefore, compared with the previous assumption, it can be concluded that the Normal-GARCH(1,1) approach and the Student-t GARCH(1,1) approach perform relatively robustly under the normal distributional assumption of returns modified by the CFE technique (the non-normal distributional assumption of returns).

Likewise, the RiskMetrics model produces more accurate VaRs than without the CFE, especially at the 97.5% and 99% confidence levels. This is believed to result from applying the CFE to correct the critical z-value of the normal distribution, thereby accounting for much of the fat tails.

Figures 4.4a, 4.4b; 4.5a, 4.5b and 4.6a, 4.6b below plot the estimated VaR measures under the normal distributional assumption of returns modified by the CFE technique at the different confidence levels together with actual returns for both the FTSE 100 and the S&P 500, stretching from 01/08/2007 to 22/06/2009.

Table 4.2a: Test Statistics and Backtesting Results of the Proposed VaR Models under the Returns Normal Distribution modified by the CFE for the FTSE 100

* Note: Although the HS has nothing to do with the returns distributional assumption, we still include it in this table to compare its results with the others.

** Note: The conditional coverage test is calculated with 2 degrees of freedom at the 10% significance level (critical value = 4.605), since the test statistic is simply the sum of the individual test statistics for unconditional coverage and independence.

Table 4.2b: Test Statistics and Backtesting Results of the Proposed VaR Models under the Returns Normal Distribution modified by the CFE for the S&P 500

** Note: The conditional coverage test is calculated with 2 degrees of freedom at the 10% significance level (critical value = 4.605), since the test statistic is simply the sum of the individual test statistics for unconditional coverage and independence.

Figure 4.4a: Predicted Volatility of FTSE 100 at 99% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figure 4.4b: Predicted Volatility of S&P 500 at 99% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figure 4.5a: Predicted Volatility of FTSE 100 at 97.5% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figure 4.5b: Predicted Volatility of S&P 500 at 97.5% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figure 4.6a: Predicted Volatility of FTSE 100 at 95% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figure 4.6b: Predicted Volatility of S&P 500 at 95% Confidence Level under the Normal Distributional Assumption of Returns modified by the CFE

Figures 4.4a, 4.4b and 4.5a, 4.5b show that at the 99% and 97.5% confidence levels, whereas the historical simulation approach undervalues the actual losses for both indexes, the parametric models' results are much better than those obtained under the normal distribution, since they strongly cover the real losses and even much overvalue the losses in the final months of 2008. In contrast, we find that many extreme losses exceed the predicted VaRs at the 95% confidence level, and hence almost no model can capture the fat tails that occurred within the crisis time (see figures 4.6a, 4.6b). So far, it can safely be summarised that the proposed VaR methods do not perform efficiently at a low confidence level, such as 95%, during the volatile period.

Overall, it appears that, except for the historical simulation approach, the three parametric models produce the VaRs relatively powerfully at the higher confidence levels under the assumption that returns are non-normally distributed. In particular, the simple GARCH(1,1) family models handle periods with heavy fluctuations much better than the other methods. According to Oh and Kim (2007), this results from filtering the volatility clustering effect, and hence reducing the volatility clustering significantly.

Also, as Goorbergh and Vlaar (1999) showed, the symmetric GARCH(1,1) family models incorporate new information each day, which makes their predictive performance superior to the others. In fact, Tables 4.2a and 4.2b clearly demonstrate the superiority of the GARCH approaches in terms of VaR accuracy, as the average exception numbers are nearly equal to the expected ones at the 97.5% and 99% confidence levels for the FTSE 100 and the S&P 500, respectively.

### 4.3. Backtesting VaR approaches under the Student Distributional Assumption of Returns

As mentioned above, the empirical evidence strongly indicates that asset returns exhibit non-zero skewness and excess kurtosis, particularly at high frequencies. This implies that when the VaR models are applied under the normal distributional assumption of returns, they can produce inaccurate VaRs compared with the true VaRs. Section 4.1 clearly substantiates this. This explains why we then additionally employed the CFE technique to correct the critical z-value from the normal distributional assumption and account for much more of the fat tails.

As a result, the performance of the proposed parametric VaR approaches is significantly better than before applying the CFE, especially for the GARCH-related methods. Nevertheless, the paper has not yet identified the best model to apply during the crisis period. In this sense, the study examines the selected VaR models under another distributional assumption: the Student-t distribution of returns.

Tables 4.3a and 4.3b below exhibit the test statistics and backtesting results of the proposed VaR models under the Student-t distributional assumption of returns. The backtesting results show several new academic points. First, the RiskMetrics, which is usually measured under the normal distributional assumption of returns, performs extremely well at the highest confidence level under the Student-t distributional assumption for both indexes, even much better than the GARCH-related methods.

In particular, at the 99% confidence level, the J.P. Morgan model is completely accepted by the three backtesting methodologies, since the number of actual violations over the whole backtesting period (T1) for both indexes is almost exactly equal to the target violation number (Ttarget), indicating that the approach estimates the "true" VaR very accurately. This might be a new point in the literature on standard deviation-based market risk management. As discussed in Chapter 2, the RiskMetrics has been the subject of a great many previous studies. Most of these assume that asset returns are normally distributed, and consequently the approach is almost always rejected, because recent financial data exhibit much fatter tails than the normal distribution.

In contrast, our result, which is based on the Student-t distributional assumption of returns, reveals that the VaR predicted by the RiskMetrics method is consistent with the true VaR at the 99% confidence level during the recent crash period. The second point relates to the GARCH(1,1) family models. They are, intuitively, no better than under the normal distributional assumption of returns modified by the CFE. Also, there is no evidence that the t-GARCH(1,1) model outperforms the N-GARCH(1,1) model under the Student-t distributional assumption of returns. Last but not least, similar to the two previous sections, during the turbulent stage the selected VaR approaches are broadly rejected at a low confidence level such as 95%, and should instead be measured at high confidence levels such as 99% or 99.9%.

Figures 4.7a, 4.7b; 4.8a, 4.8b and 4.9a, 4.9b below illustrate the estimated VaR measures under the Student-t distributional assumption of returns at the three confidence levels together with actual returns for both the FTSE 100 and the S&P 500, spanning the recent worldwide financial volatility.

Table 4.3a: Test Statistics and Backtesting Results of the Proposed VaR Models under the Student Distribution Assumption of Returns for the FTSE 100

Figure 4.7a: Predicted Volatility of FTSE 100 at 99% Confidence Level under the Student Distributional Assumption of Returns

Figure 4.7b: Predicted Volatility of S&P 500 at 99% Confidence Level under the Student Distributional Assumption of Returns

Figure 4.8a: Predicted Volatility of FTSE 100 at 97.5% Confidence Level under the Student Distributional Assumption of Returns

Figure 4.8b: Predicted Volatility of S&P 500 at 97.5% Confidence Level under the Student Distributional Assumption of Returns

Figure 4.9a: Predicted Volatility of FTSE 100 at 95% Confidence Level under the Student Distributional Assumption of Returns

Figure 4.9b: Predicted Volatility of S&P 500 at 95% Confidence Level under the Student Distributional Assumption of Returns

Figures 4.7a and 4.7b show that at the 99% confidence level the three parametric methods capture relatively effectively the fat tails and the volatility clustering arising in the empirical data for the FTSE 100 and the S&P 500. Conversely, at the lower confidence levels of 97.5% and 95%, the models are mostly weak in covering the actual excessive losses, especially at the peak of the credit crunch when the extreme losses reached approximately -10% for both indexes. The evidence is that many severe losses exceeded the predicted VaRs during the forecasting period (also see figures 4.8a, 4.8b and 4.9a, 4.9b). This explains why the models are broadly rejected by the suggested backtesting techniques.

### Conclusions of Analysis:

Under the normal distributional assumption of returns, the proposed VaR models are largely poor at capturing the fat tails and the volatility clustering, producing VaRs inconsistent with the "true" VaRs. Nevertheless, the GARCH-related methods are noticeably much better than the others at the highest confidence level.

After additionally applying the CFE technique, the parametric models perform considerably better. Again, the simple GARCH(1,1) family models produce the VaRs relatively powerfully by handling periods with heavy fluctuations much better than the other methods. Similarly, the RiskMetrics approach also performs fairly effectively under the non-normal distributional assumption of returns.

Finally, under the Student-t distributional assumption of returns, the J.P. Morgan approach (the RiskMetrics) performs exceedingly well at the highest confidence level, even much better than the GARCH(1,1)-related methods. Interestingly, the GARCH(1,1) approaches do not produce better results than in the two previous sections.

### Other new points for the literature:

During the recent crisis period, the selected VaR models generally perform fairly effectively only at the 99% confidence level. For financial institutions such as banks, this quantile is in line with external regulatory capital requirements. In contrast, they are broadly rejected at the 95% confidence level. This can be very problematic for an internal risk management model, since most companies control their risk exposure at a typical level of around 5% (also see Benninga and Wiener, 1998; Jorion, 2000).

There is no evidence that the t-GARCH(1,1) model outperforms the N-GARCH(1,1) approach in predicting the VaRs.

### Chapter 5 – Conclusions

Backtesting was used to investigate the performance of the various methods with respect to the specified confidence levels.
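One standard unconditional-coverage check of this kind is Kupiec's proportion-of-failures (POF) test, which compares the observed exception rate against the VaR tail probability. The sketch below uses hypothetical exception counts purely to illustrate the mechanics; the thesis's own backtests may differ in detail.

```python
import math

def kupiec_pof(n_obs, n_exceptions, p):
    """Kupiec's proportion-of-failures LR statistic (chi-square, 1 d.o.f.).

    n_obs        : trading days in the backtest window
    n_exceptions : days on which the loss exceeded the predicted VaR
    p            : VaR tail probability (0.01 for a 99% VaR)
    """
    x, n = n_exceptions, n_obs
    phat = x / n
    # Log-likelihood under H0 (true failure rate equals p) ...
    ll0 = (n - x) * math.log(1.0 - p) + x * math.log(p)
    # ... and under the observed failure rate phat (guard the x = 0 case)
    ll1 = (n - x) * math.log(1.0 - phat) + (x * math.log(phat) if x > 0 else 0.0)
    return -2.0 * (ll0 - ll1)

# Hypothetical: 500 days, 11 exceptions against a 99% VaR (5 expected)
lr = kupiec_pof(500, 11, 0.01)
print(lr > 3.84)  # reject H0 at the 5% level if LR exceeds chi-square(1) = 3.84
```

If the number of exceptions equals its expectation exactly, the statistic is zero and the model cannot be rejected; it grows as the observed failure rate drifts away from the nominal tail probability in either direction.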

In this paper the relatively novel risk management concept of Value-at-Risk has been examined. Many practitioners have embraced Value-at-Risk as an easy-to-understand measure of the downside risk on an investment portfolio. Value-at-Risk has not only found its way into the internal risk management of banks and other financial institutions, but we have seen that it has also been firmly rooted in the regulations that supervisors have imposed on them. And although these regulations have been subject to some criticism (the Basle Committee has rather arbitrarily set certain parameters, notably the range of the multiplication factor), it is generally felt that they constitute a vast improvement on the former rigid legislation.

The study by Goorbergh and Vlaar (1999) was concerned with Value-at-Risk analysis of the stock market. A wide variety of Value-at-Risk models was presented and empirically evaluated by applying them to a fictitious investment in the Dutch stock market index AEX, mainly for illustrative purposes. Subsequently, a more rigorous approach was taken by applying all of the presented Value-at-Risk techniques to another stock market index, the Dow Jones Industrial Average. The generous availability of historical daily return data on this index allowed the authors to imitate the behaviour of banks more realistically, namely by re-estimating and re-evaluating the Value-at-Risk models each year. The main conclusions are:

1. By far the most important characteristic of stock returns for modelling Value-at-Risk is volatility clustering. This can effectively be modelled by means of GARCH. Even at the lowest left tail probabilities (up to 0.01%), modelling GARCH effectively reduces average failure rates and the fluctuation of failure rates over time, whereas at the same time the average VaR is lower.

2. For left tail probabilities of 1% or lower, the assumed conditional distribution for the stock returns needs to be fat-tailed. The Student-t distribution seems to perform better in this respect than the Bernoulli-normal mixture. At the 5% level, the normal distribution performs best.

3. Tail index techniques are not successful, due to the fact that they do not cope with the volatility clustering phenomenon. At the 1% level the assumption of a constant VaR throughout a year resulted in up to 35 violations of the VaR in one year (1974). Even at the 0.1% level, the average number of VaR violations over the last 39 years was significantly higher than expected.
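The role of volatility clustering highlighted in point 1 can be illustrated with a one-step-ahead VaR from a normal GARCH(1,1) filter. The parameter values and simulated returns below are hypothetical, chosen only to show the recursion; in the thesis the parameters are estimated by maximum likelihood in MATLAB.

```python
import numpy as np

def garch_var(returns, omega, alpha, beta, z=-2.326):
    """One-step-ahead VaR from a GARCH(1,1) volatility filter.

    Variance recursion: sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
    z is the left-tail normal quantile (about -2.326 for a 99% VaR).
    The parameters are assumed given; in practice they are ML estimates.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var()                      # initialise at the sample variance
    for t in range(len(r)):
        sigma2[t + 1] = omega + alpha * r[t] ** 2 + beta * sigma2[t]
    return -z * np.sqrt(sigma2[-1])          # next-day VaR as a positive loss

# Hypothetical daily returns; omega/alpha/beta are illustrative values
rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_normal(250)
print(round(garch_var(rets, omega=1e-6, alpha=0.08, beta=0.90), 4))
```

Because tomorrow's variance feeds on today's squared return, a large loss immediately raises the predicted VaR, which is exactly the clustering behaviour that constant-volatility and tail-index methods miss.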

### Limitations

(a) Modeling Multivariate Returns

The analysis in this thesis is limited to investment portfolios consisting of a single asset, such as an equity index. Investors, however, generally hold portfolios of multiple assets. Therefore, in practical applications it is often necessary to aggregate the statistics of individual assets into portfolio-level measures. If the components of the portfolio were uncorrelated with one another, it would be straightforward to obtain the portfolio return distribution and its VaR.

This, however, does not hold in the real world, as the estimation techniques used here do not account for the correlations across assets. Zangari (1996) and Venkataraman (1997) argue that estimating variable correlations by practical means is impossible.

They simply suggest assuming how the portfolio components are correlated. Again, further studies need to be carried out to tackle the correlation problem.
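For the linear positions considered here, the textbook variance-covariance aggregation shows what such an extension would involve: the portfolio VaR is the normal quantile scaled by the portfolio volatility sqrt(w'Σw). The weights and covariance entries below are purely hypothetical, not estimates from the FTSE 100 or S&P 500 series.

```python
import numpy as np

def portfolio_var(weights, cov, z=-1.645):
    """Portfolio VaR under joint normality: -z * sqrt(w' Sigma w).

    weights : portfolio weights (summing to 1)
    cov     : covariance matrix of the asset returns
    z       : left-tail normal quantile (about -1.645 for a 95% VaR)
    """
    w = np.asarray(weights, dtype=float)
    sigma_p = np.sqrt(w @ np.asarray(cov, dtype=float) @ w)  # portfolio volatility
    return -z * sigma_p

# Hypothetical two-index portfolio with positively correlated daily returns
cov = np.array([[1.44e-4, 0.90e-4],
                [0.90e-4, 1.21e-4]])   # illustrative daily covariances
print(round(portfolio_var([0.5, 0.5], cov), 4))
```

Because the correlation is below one, the aggregated VaR comes out smaller than the weighted sum of the two stand-alone VaRs, which is the diversification effect that a single-asset analysis cannot capture.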

(b) Tradeoff between execution time and ease of programming

Throughout the thesis, I use MATLAB, writing computer programs to conduct SVM parameter estimation and VaR calculation. Since MATLAB provides many useful built-in functions (such as GARCH model fitting) and can directly perform matrix computation, the programs are easy to write as long as the algorithm is well structured. In this respect, MATLAB is much superior to the C programming language, since C lacks comparable library functions and matrix computation commands that could easily solve the problem.

This brings considerable complexity and difficulty to writing the program in C. With respect to execution time, however, C has great advantages: in estimating the SVM parameters, the job can be finished within half a day in C but takes a whole day in MATLAB under the same conditions. On the other hand, a change in the algorithm might also greatly improve time efficiency. For example, in the SVM estimation process, using a rejection approach rather than the Griddy Gibbs sampling method could largely reduce the running time, although the program would be much more complicated. How to handle this tradeoff is therefore a difficult question.

An important limitation of the analysis, however, is that it does not consider portfolios containing options or other positions with nonlinear price behavior.

### Suggestion of Further Research

It is therefore very important to develop methodologies that provide accurate VaR estimates.

According to Danielsson (2008), one of the most important lessons of the subprime crisis has been the exposure of the unreliability of models and the corresponding importance of management.