# Applications and Limitations of Value at Risk

7122 words (28 pages) Essay in Finance

08/02/20


### Abstract

In this article, I give a detailed introduction to VaR, an important method of valuing risk. I explain the concept of VaR and describe its calculation. I then discuss some special tools such as marginal VaR, incremental VaR, and component VaR. Most importantly, I list the limitations of VaR and consider some effective alternatives.

Key Words: VaR, Application, Limitation

### Introduction

Value at Risk is an effective measure of the potential loss of an investment. It is a statistical technique used to measure and quantify the level of financial risk within a firm or investment portfolio over a specific time frame. The metric is most commonly used by investment and commercial banks to determine the size and probability of potential losses in their institutional portfolios. Risk managers use VaR to measure and control the level of risk exposure. One can apply VaR calculations to specific positions or whole portfolios, or use them to measure firm-wide risk exposure. In this section, I explain the concept of VaR and some of its general tools.

(1) Definition

Given a confidence level α ∈ (0, 1), the Value-at-Risk of a portfolio X at level α over the time period t is the smallest number k ∈ ℝ such that the probability of a loss over the interval t greater than k is at most 1 − α:

$$\mathrm{VaR}_{\alpha}(X) = \inf\{x \in \mathbb{R} : F_X(x) \ge 1 - \alpha\}$$

In order to calculate the VaR number, a straightforward way is to use the distribution of the portfolio return. Given $f_X$, the probability density function (pdf) of X, and α the confidence level, the VaR over a time interval satisfies the following equation:

$$1-\alpha=\int_{-\infty}^{\mathrm{VaR}} f_X(x)\,dx$$

The most common example is the normal distribution. For a given portfolio whose return is normally distributed with mean µ and standard deviation σ, the VaR number can be read off the standard normal table: there is a number corresponding to the confidence level α. For example, if α is chosen to be 95%, the corresponding number is 1.65, and if α is 99%, the corresponding number is 2.33. Since VaR corresponds to the left tail, the actual cut-off is negative.

Moreover, for any general distribution, we can apply the standard transformation

$$z=\frac{x-\mu}{\sigma}$$

to obtain the VaR number. Furthermore, if $F_X(x)$ is the cumulative distribution function (cdf) of X, the equation can be written as

$$1-\alpha = F_X(\mathrm{VaR})$$
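As a minimal sketch of the normal-distribution case (using only Python's standard library; the portfolio figures are hypothetical), the VaR can be computed directly from the standard normal quantile:

```python
from statistics import NormalDist

def parametric_var(mu, sigma, alpha=0.99, value=1.0):
    """VaR under normality: the loss level exceeded with probability
    1 - alpha, reported as a positive amount."""
    z = NormalDist().inv_cdf(alpha)  # ~1.65 at alpha=0.95, ~2.33 at alpha=0.99
    return (z * sigma - mu) * value

# Hypothetical portfolio: mean daily return 0.05%, daily volatility 1%, $1m value
var_99 = parametric_var(mu=0.0005, sigma=0.01, alpha=0.99, value=1_000_000)
```

For these illustrative figures, the one-day 99% VaR is roughly $22,800, i.e. the loss exceeded on about one trading day in a hundred.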

(2) VaR Tools

An important objective of the VaR approach is to control and manage risk. Therefore, in addition to calculating the VaR of the entire portfolio, we also want to know which asset contributes most to the total risk, what impact deleting or adding assets would have, and so on. In this section, the VaR tools used to control and manage portfolio risk are analyzed in detail.

a) Marginal VaR

The first tool for risk management is the marginal VaR, defined as the partial derivative of the portfolio VaR with respect to a component's weight. It measures the change in portfolio VaR resulting from adding an additional dollar to a component. Taking the partial derivative of the variance with respect to $w_i$:

$$\frac{\partial \sigma_P^2}{\partial w_i} = 2 w_i \sigma_i^2 + 2\sum_{j=1,\, j\neq i}^{N} w_j \sigma_{ij} = 2\,\mathrm{cov}\!\left(R_i,\; w_i R_i + \sum_{j\neq i}^{N} w_j R_j\right) = 2\,\mathrm{cov}(R_i, R_P)$$

Since $\partial \sigma_P^2 = 2\sigma_P\,\partial\sigma_P$, the above equation is equivalent to

$$\frac{\partial \sigma_P}{\partial w_i} = \frac{\mathrm{cov}(R_i, R_P)}{\sigma_P}$$

Therefore,

$$\Delta\mathrm{VaR}_i = \frac{\partial(\alpha\,\sigma_P P)}{\partial(w_i P)} = \alpha\,\frac{\partial \sigma_P}{\partial w_i} = \alpha\,\frac{\mathrm{cov}(R_i, R_P)}{\sigma_P} = \alpha\,\frac{\sigma_{iP}}{\sigma_P}$$

This is the marginal VaR of the i-th component. The marginal VaR is closely related to the vector β, whose i-th component is defined by

$$\beta_i = \frac{\mathrm{cov}(R_i, R_P)}{\sigma_P^2} = \frac{\sigma_{iP}}{\sigma_P^2}$$

Recalling that $\sigma_P^2 = w^T \Sigma w$, the vector β can be expressed in matrix notation as

$$\beta = \frac{\Sigma w}{w^T \Sigma w}$$

Since $\rho_{iP} = \sigma_{iP}/(\sigma_i \sigma_P)$, we have

$$\beta_i = \frac{\rho_{iP}\,\sigma_i\,\sigma_P}{\sigma_P^2} = \rho_{iP}\,\frac{\sigma_i}{\sigma_P}$$

Thus the relationship between $\Delta\mathrm{VaR}_i$ and $\beta_i$ is

$$\Delta\mathrm{VaR}_i = \alpha\,(\beta_i \times \sigma_P) = \frac{\mathrm{VaR}}{P}\times\beta_i$$
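A brief numerical sketch of the β vector and the marginal VaR formula (assuming NumPy; the returns are simulated and the weights are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.01, size=(500, 3))  # 500 days of returns for 3 assets
w = np.array([0.5, 0.3, 0.2])             # portfolio weights

cov = np.cov(R, rowvar=False)             # covariance matrix Sigma
sigma_p = np.sqrt(w @ cov @ w)            # portfolio volatility

beta = (cov @ w) / (w @ cov @ w)          # beta = Sigma w / (w' Sigma w)
z = 2.33                                  # 99% standard normal quantile
marginal_var = z * sigma_p * beta         # Delta VaR_i = alpha * beta_i * sigma_P
```

A useful sanity check is that the weighted betas sum to one, so the weighted marginal VaRs recover the total portfolio VaR.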

b) Incremental VaR

Another tool for risk management is the incremental VaR, which measures the change in VaR due to a new position in the portfolio. Let a be the new position added, with $a_i$ the amount invested in asset i. Intuitively, the incremental VaR is defined as the difference between the new VaR and the original VaR:

$$\text{Incremental VaR} = \mathrm{VaR}_{P+a} - \mathrm{VaR}_{P}$$

However, calculating the VaR of the new portfolio requires computing a new covariance matrix, which can be time-consuming. The following approximation is therefore sometimes used to shorten the computation. Expanding $\mathrm{VaR}_{P+a}$, we have

$$\mathrm{VaR}_{P+a} = \mathrm{VaR}_{P} + (\Delta\mathrm{VaR})^{T}\times a + \dots$$

Thus, when a is small relative to P,

$$\text{Incremental VaR} \approx (\Delta\mathrm{VaR})^{T}\times a$$

Therefore, we can compute $\Delta\mathrm{VaR}$ and $\mathrm{VaR}_P$ simultaneously; when a new trade is added to the portfolio, the approximate incremental VaR is immediately known from this formula.

If only one asset is added to the portfolio, we can choose the amount to invest so that the risk is minimized. This choice is also called the best hedge.

Suppose amount $a_i$ is invested in asset i. Then the new portfolio is $P_N = P + a_i$, and the variance of dollar returns for $P_N$ is

$$\sigma_N^2 P_N^2 = \sigma_P^2 P^2 + 2 a_i P \sigma_{iP} + a_i^2 \sigma_i^2$$

Differentiating with respect to $a_i$, we get

$$\frac{\partial\, \sigma_N^2 P_N^2}{\partial a_i} = 2 P \sigma_{iP} + 2 a_i \sigma_i^2$$

The best hedge occurs where this derivative equals zero, i.e.

$$a_i^{*} = -P\,\frac{\sigma_{iP}}{\sigma_i^2}$$

Recalling the definition of β, the optimal $a_i$ can also be computed by the following formula:

$$a_i^{*} = -P\,\beta_i\,\frac{\sigma_P^2}{\sigma_i^2}$$
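The incremental VaR approximation and the best hedge can be illustrated with a small NumPy sketch (the positions and parameters are hypothetical; the trade a is deliberately small relative to the portfolio, so the first-order approximation should be accurate):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.0, 0.01, size=(500, 3))
x = np.array([0.5, 0.3, 0.2]) * 1_000_000  # dollar positions
cov = np.cov(R, rowvar=False)
z = 2.33

def var_dollar(pos):
    """Parametric VaR of a vector of dollar positions."""
    return z * np.sqrt(pos @ cov @ pos)

dvar = z * (cov @ x) / np.sqrt(x @ cov @ x)  # marginal VaR per dollar in each asset

a = np.array([10_000.0, 0.0, 0.0])           # small new trade in asset 0
exact = var_dollar(x + a) - var_dollar(x)    # full revaluation
approx = dvar @ a                            # first-order approximation

# Best hedge in asset 0: the position change that minimizes portfolio variance
a_star = -(x @ cov[:, 0]) / cov[0, 0]
```

At `a_star`, moving the asset-0 position in either direction increases the portfolio variance, which is what "best hedge" means.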

c) Component VaR

The other tool that is extremely useful for managing risk is the component VaR, a partition of the portfolio VaR that indicates how the VaR would change if a given component were deleted. We can use it to decompose the risk of the current portfolio. As discussed before, the sum of the individual VaRs is not very useful since it discards diversification effects. Thus, we define the component VaR in terms of the marginal VaR as follows:

$$\text{Component VaR}_i = (\Delta\mathrm{VaR}_i)\times w_i P = \mathrm{VaR}\,\beta_i\,w_i$$
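A short sketch (NumPy; simulated returns, hypothetical weights) showing the property that makes this decomposition useful: the component VaRs add up exactly to the portfolio VaR.

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.normal(0.0, 0.01, size=(500, 3))
w = np.array([0.5, 0.3, 0.2])
z = 2.33

cov = np.cov(R, rowvar=False)
sigma_p = np.sqrt(w @ cov @ w)
portfolio_var = z * sigma_p                  # total VaR per dollar of portfolio value

beta = (cov @ w) / (w @ cov @ w)
component_var = portfolio_var * beta * w     # Component VaR_i = VaR * beta_i * w_i
```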
### Calculation Methods of VaR

Statistically, VaR can be estimated in two different ways. When the probability distribution is known, a parameter-estimation method is used to determine the VaR value. When the distribution is unknown, the non-parametric method directly takes the empirical quantile as the VaR. Accordingly, VaR models can be divided into two general categories: parametric models and nonparametric models. A parametric model estimates VaR by assuming that the return of the portfolio follows a certain distribution, as in JP Morgan's RiskMetrics or the GARCH model. A nonparametric model makes no assumptions about the return distribution of the portfolio; it estimates VaR from analysis and simulation of historical data, as in historical simulation and Monte Carlo simulation.

(1) Historical simulation

Assume that returns are independent and identically distributed, so that the future fluctuations of the market factors are the same as their historical fluctuations. The changes observed in historical samples are used to simulate the future distribution of asset returns, and the quantile corresponding to the chosen confidence level then determines the VaR.

The method is simple, intuitive, and easy to understand, and it makes no assumptions about the statistical distribution of returns, avoiding parameter-estimation error while capturing the fat tails of the data and the autocorrelation between observations. Compared with the parametric method, historical simulation may be more accurate near the lower tail, and it is also effective for nonlinear positions. Based on this robustness and intuitiveness, the Basel Committee on Banking Supervision adopted historical simulation in 1993 as a basic measure of market risk.

The main problem with this approach is the assumption that future returns are consistent with historical changes and independent: the distribution and probability density function are assumed not to change over time, which is inconsistent with actual financial market conditions. When volatility changes greatly in the short term, the sample size has a large impact on the prediction, so accurate estimation requires long and reliable data histories (at least five years). The empirical distribution obtained by this method is generally discontinuous and provides no loss prediction beyond the worst sample point. The simulation method is therefore more a complement to the parametric method than a replacement.
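A minimal historical-simulation sketch (NumPy; the fat-tailed return series is simulated purely for illustration): the VaR is simply the empirical quantile of past returns.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = 0.01 * rng.standard_t(df=4, size=1250)  # ~5 years of fat-tailed daily returns

def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: the empirical (1 - alpha) quantile of
    returns, reported as a positive loss."""
    return -np.quantile(returns, 1.0 - alpha)

var_99 = historical_var(returns)
exceed_rate = (returns < -var_99).mean()  # in-sample, should be close to 1 - alpha
```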

(2) Moving Average

Moving average (MA) is one of the most popular and easiest-to-use tools for measuring time-varying risk. By averaging prices, the moving average provides a smooth trend line, which can be used to predict future changes in the risk factor. Suppose we have data on returns $r_t$ over n days and we choose to use an M-day average. Then day M is the first day on which an average can be computed, and the average variance is

$$\sigma_M^2 = \frac{1}{M}\sum_{t=1}^{M} r_t^2$$

The variance for day M + 1 is obtained by adding the newest observation $r_{M+1}$ and dropping the earliest, $r_1$. The process continues in this way: each day, the variance is updated by adding the most recent day's squared return, dropping the one from M days ago, and dividing the sum by M. The general formula is

$$\sigma_t^2 = \frac{1}{M}\sum_{i=0}^{M-1} r_{t-i}^2$$

Once all n days of data are used, the resulting points can be fitted with a smooth line, and this line indicates the trend of the changes.
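The rolling-window computation can be sketched in a few lines (NumPy; the return series and window length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(0.0, 0.01, size=250)  # one year of hypothetical daily returns
M = 20                               # window length

# sigma_t^2 = mean of the M most recent squared returns, first computable at day M
ma_var = np.array([np.mean(r[t - M:t] ** 2) for t in range(M, len(r) + 1)])
ma_vol = np.sqrt(ma_var)             # the smooth volatility trend line
```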

(3) GARCH Estimation

Because the GARCH model describes the characteristics of financial time series well, including the time variation of variance and fat tails, it estimates volatility for the VaR model better than other methods. The model usually includes two equations: the first is the autoregressive or conditional mean equation, and the other is the conditional variance equation, which iterates out the volatility value for each period.

Define $h_t$ as the conditional variance, i.e. the forecast of the variance of the time series at time t based on previous data. In the GARCH(p, q) model, $h_t$ is a function of the previous returns up to time t − p and the previous conditional variances up to time t − q:

$$h_t = \alpha_0 + \alpha_1 r_{t-1}^2 + \alpha_2 r_{t-2}^2 + \dots + \alpha_p r_{t-p}^2 + \beta_1 h_{t-1} + \beta_2 h_{t-2} + \dots + \beta_q h_{t-q}$$

where $r_t$ is the return on day t and $h_t$ is the conditional variance on day t.

Here we focus on the simplest case, the GARCH(1,1) process. In the GARCH(1,1) model, $h_t$ is calculated from the return $r_{t-1}$ and the conditional variance $h_{t-1}$ by the following formula:

$$h_t = \alpha_0 + \alpha_1 r_{t-1}^2 + \beta_1 h_{t-1}$$

Since the return $r_t$ is a normal variable with mean zero and conditional variance $h_t$, we can write

$$r_t = \sqrt{h_t}\,\varepsilon_t \quad \text{with} \quad \varepsilon_t \sim N(0,1)$$
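The GARCH(1,1) recursion can be filtered through a return series in a few lines (NumPy; the parameter values are hypothetical, though in the range typical for daily data):

```python
import numpy as np

def garch11_variance(r, alpha0, alpha1, beta1):
    """Iterate h_t = alpha0 + alpha1 * r_{t-1}^2 + beta1 * h_{t-1},
    starting from the unconditional variance alpha0 / (1 - alpha1 - beta1)."""
    h = np.empty(len(r))
    h[0] = alpha0 / (1.0 - alpha1 - beta1)
    for t in range(1, len(r)):
        h[t] = alpha0 + alpha1 * r[t - 1] ** 2 + beta1 * h[t - 1]
    return h

rng = np.random.default_rng(5)
r = rng.normal(0.0, 0.01, size=500)
h = garch11_variance(r, alpha0=1e-6, alpha1=0.05, beta1=0.90)
var_99 = 2.33 * np.sqrt(h[-1])  # one-day 99% VaR from the latest variance forecast
```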

### Limitations of VaR

(1) VaR does not measure worst case loss

VaR ignores tail events. A 99% VaR really means that in 1% of cases (2-3 trading days in a year, for daily VaR) the loss is expected to be greater than the VaR amount. Value at Risk says nothing about the size of losses within that 1% of trading days, and by no means does it say anything about the maximum possible loss. The worst case might be only a few percent higher than the VaR, but it could also be large enough to liquidate your company. Goldman Sachs during the subprime mortgage crisis shows the effect of this 1%: its CFO famously reported 25-sigma moves occurring several days in a row, a textbook super black swan. Under a normal distribution, an 8-sigma daily event is expected roughly once since the birth of the Earth, 9 sigma is already at the limit of what Matlab can calculate, and 25 sigma corresponds to something like "once in 1.3e+135 years."

(2) VaR can be misleading: false sense of security

This index is too direct and easily misleads. We all understand the definition of VaR, but when the number appears in front of you, it is easy to read it as "my biggest loss," which creates an illusion of security. However, this maximum possible loss is defined only within a confidence interval strictly less than one. Unfortunately, in reality 99% is very far from 100%, and this is where the limitations of VaR, and an incomplete understanding of them, can be fatal.

(3) VaR is not that effective and accurate

The three most commonly used methods are the weighted variance-covariance method, simple historical simulation, and simple Monte Carlo simulation, and each works only in general situations. When the portfolio is large, complicated, and nonlinear, calculating the covariance matrix is very painful. In the presence of skewness and excess kurtosis, treating returns as normal will underestimate the risk. At the same time, VaR is not additive (it is not even sub-additive), so the calculation must be repeated after every position adjustment, which is troublesome. Finally, the results of different calculation methods often differ significantly, which is also awkward, because you do not know which is the most representative.

### Approaches to Overcoming the Limitations

(1) Expected Shortfall (conditional VaR)

Expected Shortfall is defined as the average of all losses greater than or equal to VaR, i.e. the average loss in the worst (1 − α)% of cases, where α is the confidence level. Said differently, it gives the expected value of an investment in the worst q% of cases. Another important advantage of ES is that it is sub-additive. The method studies the mean of the tail losses, weighting each loss in the tail equally. The calculation result is closer to the actual situation, but ES is more complicated to calculate. Here is the mathematical definition.

If X is the payoff of a portfolio at some future time and 0 < α < 1 (in the formulas below, α denotes the tail probability, i.e. one minus the confidence level), then we define the expected shortfall as

$$\mathrm{ES}_{\alpha} = -\frac{1}{\alpha}\int_{0}^{\alpha} \mathrm{VaR}_{\gamma}(X)\,d\gamma$$

where $\mathrm{VaR}_{\gamma}$ is the Value at Risk. This can be equivalently written as

$$\mathrm{ES}_{\alpha} = -\frac{1}{\alpha}\left( E\!\left[X\,1_{\{X \le x_{\alpha}\}}\right] + x_{\alpha}\left(\alpha - P\!\left[X \le x_{\alpha}\right]\right) \right)$$

where $x_{\alpha} = \inf\{x \in \mathbb{R} : P(X \le x) \ge \alpha\}$ is the lower α-quantile and

$$1_A(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{else} \end{cases}$$

is the indicator function. The dual representation is

$$\mathrm{ES}_{\alpha} = \inf_{Q \in \mathcal{Q}_{\alpha}} E^{Q}\left[X\right]$$

where $\mathcal{Q}_{\alpha}$ is the corresponding set of probability measures.
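In practice, ES is often estimated from the empirical tail rather than from the integral above. A minimal sketch (NumPy; simulated fat-tailed returns; this version uses the confidence-level convention, e.g. 99%):

```python
import numpy as np

rng = np.random.default_rng(6)
returns = 0.01 * rng.standard_t(df=4, size=10_000)

def var_es(returns, level=0.99):
    """Historical VaR and Expected Shortfall: ES averages every loss at
    least as large as the VaR, so ES >= VaR by construction."""
    losses = -returns
    var = np.quantile(losses, level)
    es = losses[losses >= var].mean()
    return var, es

var_99, es_99 = var_es(returns)
```

Because ES looks past the quantile into the tail, it is always at least as large as the VaR at the same level.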

(2) Extreme Value Theory (EVT)

Extreme Value Theory (EVT) takes a different approach to calculating VaR: it concentrates on estimating the shape of only the tail of a probability distribution. Given this shape, we can find estimates for losses associated with very small probabilities, such as the 99.9% VaR. A typical shape used is the Generalized Pareto Distribution, whose parameters are chosen so that the function fits the data in the tail. The main problem with the approach is that it is only easily applicable to single risk factors. It is also, by definition, difficult to parameterize, because there are few observations of extreme events.

Let $X_1, X_2, \dots, X_n$ be a sequence of independent and identically distributed random variables with cumulative distribution function F, and let $M_n = \max(X_1, X_2, \dots, X_n)$ denote the maximum.

In theory, the exact distribution of the maximum can be derived:

$$\Pr(M_n < z) = \Pr(X_1 < z,\ X_2 < z, \dots, X_n < z) = \Pr(X_1 < z)\,\Pr(X_2 < z)\cdots\Pr(X_n < z) = \left(F(z)\right)^{n}$$

The associated indicator function $I_n = I(M_n > z)$ is a Bernoulli process with a success probability $p(z) = 1 - \left(F(z)\right)^{n}$ that depends on the magnitude z of the extreme event. The number of extreme events within n trials thus follows a binomial distribution, and the number of trials until an event occurs follows a geometric distribution with expected value and standard deviation of the same order $O(1/p(z))$.
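A peaks-over-threshold sketch of the EVT approach (NumPy; the method-of-moments GPD fit and the tail-quantile formula are standard EVT tools, applied here to simulated heavy-tailed losses, and all figures are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
losses = 0.01 * rng.standard_t(df=3, size=20_000)  # heavy-tailed hypothetical losses

u = np.quantile(losses, 0.95)   # threshold: treat the worst 5% as "extremes"
y = losses[losses > u] - u      # exceedances over the threshold
m, s2 = y.mean(), y.var()

# Method-of-moments fit of the Generalized Pareto Distribution to the tail
xi = 0.5 * (1.0 - m * m / s2)         # shape parameter
beta = 0.5 * m * (1.0 + m * m / s2)   # scale parameter

# Tail quantile (VaR) beyond the threshold at a very high confidence level
level = 0.999
n, n_u = len(losses), len(y)
var_evt = u + (beta / xi) * ((n / n_u * (1.0 - level)) ** (-xi) - 1.0)
```

The fitted tail lets us extrapolate a 99.9% VaR from only the exceedances over the threshold, which is exactly where a purely empirical quantile runs out of data.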

