ARMA-GARCH Model Theories Application
Chapter 3 Theoretical Properties
3.1 Introduction
In this chapter, we study some theoretical properties of our suggested models. More specifically, we discuss the conditions for the models' stability, under which their Markov chain representation is geometrically ergodic, and more precisely Q-geometrically ergodic, which implies that there exists an initial distribution rendering our models strictly stationary and β-mixing. Moreover, we consider the conditions required to establish consistency and asymptotic normality of the estimator vector.
In addition, we present the multivariate versions of our suggested models, which, although they are not used in the empirical applications of this thesis, could be of interest at least at a theoretical level.
3.2 Geometric ergodicity
There have been several studies of the stability of nonlinear autoregressive models with heteroskedastic errors. However, most of them have concentrated on Threshold Autoregressive models for the conditional mean with conditional heteroskedasticity (e.g. Ling (1999); Liu, Li and Li (1999)) or on nonlinear autoregressions with ARCH-type, rather than GARCH-type, errors (e.g. Liebscher (2005); Masry and Tjøstheim (1995)).
In this section we adopt the analysis of Meitz and Saikkonen (2006), who, following Liebscher (2005), considered a general nonlinear autoregressive model of order s with first-order generalised autoregressive conditional heteroskedastic (GARCH(1,1)) errors and provided conditions for its stability, in the sense of geometric ergodicity, and specifically Q-geometric ergodicity. Since such a model can be represented as a Markov chain, their analysis is based on the theory of Markov chains (for a detailed treatment of Markov chains and their stability theory, see Meyn and Tweedie (2008)).
It should be highlighted that their results are restricted to smooth nonlinear functions for the conditional mean and conditional variance models, in the sense that their derivatives exist and are continuous. This condition, which is not satisfied by, e.g., Threshold Autoregressive or Threshold GARCH models, holds for our models, as they all contain an exponential term and no discontinuities, and hence are smooth. This fact justifies our choice to follow their analysis.
It should be noted, though, that the restriction to a first-order, rather than higher-order, GARCH model is due to the difficulty of establishing irreducibility of the Markov chain, a property needed to prove geometric ergodicity (Meitz and Saikkonen, 2006). Their results hold not only for nonlinear autoregressive models with GARCH errors, but also for nonlinear autoregressive models combined with any smooth nonlinear GARCH-type model for the conditional variance.
It is also worth mentioning that Q-geometric ergodicity is more useful than geometric ergodicity: the former not only implies, as geometric ergodicity does, that an initial distribution rendering the Markov chain strictly stationary and β-mixing exists, but also implies the existence of certain moments of the stationary distribution and that the β-mixing property holds for various nonstationary initial distributions as well (Meitz and Saikkonen, 2006). Consequently, limit theorems can be applied and an asymptotic theory can be established.
In order to establish the above results, however, our models, which were presented in Chapter 2, must be transformed into a Markov chain representation. This transformation is presented in the next subsection.
3.2.1 Markov chain representation
Let $\{y_t\}$ be the stochastic process of our interest, which is generated by
$$y_t = f(y_{t-1}, \ldots, y_{t-s}) + u_t,$$
$$u_t = h_t^{1/2}\,\varepsilon_t,$$
$$h_t = \omega + \alpha u_{t-1}^2 + \beta h_{t-1},$$
where $f(\cdot)$ denotes our nonlinear autoregressive process for $y_t$ (e.g. ExpAR, Extended ExpAR, etc.) of order $s$, $h_t$ is the conditional variance of $u_t$, which is specified as a GARCH(1,1) model by assumption, and $\varepsilon_t$ is an i.i.d. sequence with zero mean and unit variance.
Meitz and Saikkonen (2006) showed that such a model can be transformed into a Markov chain on a subset of $\mathbb{R}^{s+1}$ by stacking the lagged observations and the conditional variance into the state vector
$$X_t = (y_t, y_{t-1}, \ldots, y_{t-s+1}, h_{t+1})',$$
where $h_{t+1} = \omega + \alpha u_t^2 + \beta h_t$ and $u_t = y_t - f(y_{t-1}, \ldots, y_{t-s})$, so that $X_t$ depends only on $X_{t-1}$ and $\varepsilon_t$.
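To fix ideas, a minimal simulation sketch of this setup is given below. It generates an ExpAR(2) conditional mean with GARCH(1,1) errors and forms the state vector described above; all parameter values (phi, pi_, gamma, omega, alpha, beta) are illustrative placeholders rather than values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not taken from the thesis)
phi   = np.array([0.4, -0.2])          # linear AR coefficients, order s = 2
pi_   = np.array([0.3,  0.1])          # coefficients on the exponential term
gamma = 1.0                            # decay rate in exp(-gamma * y_{t-1}^2)
omega, alpha, beta = 0.05, 0.10, 0.85  # GARCH(1,1) parameters

T, s = 1000, 2
y = np.zeros(T)
u = np.zeros(T)
h = np.full(T, omega / (1.0 - alpha - beta))  # start at the unconditional variance

def f_expar(lags):
    """ExpAR(s) mean: sum_i (phi_i + pi_i * exp(-gamma * y_{t-1}^2)) * y_{t-i}."""
    w = np.exp(-gamma * lags[0] ** 2)         # lags[0] is y_{t-1}
    return np.sum((phi + pi_ * w) * lags)

for t in range(s, T):
    h[t] = omega + alpha * u[t - 1] ** 2 + beta * h[t - 1]  # GARCH(1,1) recursion
    u[t] = np.sqrt(h[t]) * rng.standard_normal()
    y[t] = f_expar(y[t - s:t][::-1]) + u[t]                 # lags ordered (y_{t-1}, ..., y_{t-s})

# Markov chain state at the final date: (y_t, y_{t-1}, h_{t+1})
h_next = omega + alpha * u[-1] ** 2 + beta * h[-1]
print("state X_T =", np.array([y[-1], y[-2], h_next]))
```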
3.2.2 Q-geometric ergodicity
Next the definition of Q-geometric ergodicity for a Markov chain, as given by Meitz and Saikkonen (2006) and Liebscher (2005), is presented.
Definition: The Markov chain $\{X_t\}$ on the state space $\mathsf{X}$ is Q-geometrically ergodic if there exist a measurable function $Q: \mathsf{X} \to [1, \infty)$, a probability measure $\pi$ on the Borel sets of $\mathsf{X}$, and constants $c < \infty$ and $\rho \in (0,1)$ such that
$$\sup_{|g| \leq Q}\left|\int g(y)\,P^{n}(x, dy) - \int g(y)\,\pi(dy)\right| \leq c\, Q(x)\, \rho^{n}$$
and
$$\int Q(y)\,\pi(dy) < \infty$$
for all $n \in \mathbb{N}$ and all $x \in \mathsf{X}$, where $P^{n}(x, \cdot)$ is the n-step transition probability measure of the Markov chain, defined on the Borel sets of $\mathsf{X}$ by
$$P^{n}(x, A) = \Pr(X_{n} \in A \mid X_{0} = x).$$
3.2.3 Assumptions
In this subsection we present the conditions that are sufficient for establishing Q-geometric ergodicity and the existence of certain moments for the Markov chain associated with our nonlinear autoregressive model with GARCH errors, as shown by Meitz and Saikkonen (2006) and discussed further by Chan, McAleer and Medeiros (2011). Most of the required assumptions apply to the conditional mean and conditional variance models separately, which makes it easier to check whether they hold, and only one assumption concerns both the conditional mean and the conditional variance model.
Assumption 1: $\varepsilon_t$ has a (Lebesgue) density which is positive and lower semicontinuous on $\mathbb{R}$. Moreover, for some real $a_0 \geq 1$, $E|\varepsilon_t|^{2a_0} < \infty$.
Assumption 2: The function $f$ is of the form
$$f(y_1, \ldots, y_s) = \sum_{i=1}^{s} a_i(y_1, \ldots, y_s)\, y_i + b(y_1, \ldots, y_s),$$
where the functions $a_i(\cdot)$, $i = 1, \ldots, s$, and $b(\cdot)$ are bounded and smooth.
Assumption 3: Using the functions $a_i$ from Assumption 2, set $a(y) = (a_1(y), \ldots, a_s(y))$ and define the $s \times s$ companion matrix
$$A(y) = \begin{bmatrix} a_1(y) & a_2(y) & \cdots & a_{s-1}(y) & a_s(y) \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}.$$
Then there exists a matrix norm $\|\cdot\|$, induced by a vector norm, such that $\|A(y)\| \leq \gamma$ for all $y \in \mathbb{R}^{s}$, where $\gamma < 1$.
One sufficient condition for Assumption 3 to hold is $\sum_{i=1}^{s} \bar{a}_i < 1$ or, equivalently, that the roots of the characteristic polynomial $z^{s} - \bar{a}_1 z^{s-1} - \cdots - \bar{a}_s$ lie inside the unit circle, where $\bar{a}_i = \sup_{y \in \mathbb{R}^{s}} |a_i(y)|$ (see Meitz and Saikkonen (2006), Lemma 1).
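As a quick numerical illustration of this sufficient condition, the sketch below uses hypothetical bounds $\bar{a}_i$ (not values from the thesis) and checks both the sum condition and the equivalent requirement that the roots of the characteristic polynomial lie inside the unit circle.

```python
import numpy as np

# Hypothetical bounds a_bar_i = sup |a_i(y)|, i = 1, ..., s (illustrative only)
a_bar = np.array([0.5, 0.3, 0.1])
s = len(a_bar)

# Roots of z^s - a_bar_1 z^{s-1} - ... - a_bar_s
roots = np.roots(np.concatenate(([1.0], -a_bar)))
print("sum of bounds        =", a_bar.sum())            # sufficient: < 1
print("max root modulus     =", np.max(np.abs(roots)))  # equivalent: < 1

# The same check via the spectral radius of the companion matrix
A = np.zeros((s, s))
A[0, :] = a_bar
A[1:, :-1] = np.eye(s - 1)
print("spectral radius of A =", np.max(np.abs(np.linalg.eigvals(A))))
```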
Assumption 4: Write the conditional variance recursion as $h_t = g(u_{t-1}, h_{t-1})$ (for the GARCH(1,1) specification, $g(u, h) = \omega + \alpha u^{2} + \beta h$).
(a) The function $g$ is smooth and $g(u, h) \geq \underline{g}$ for some $\underline{g} > 0$.
(b) For all $h$, $g(u, h) \to \infty$ as $|u| \to \infty$.
(c) There exists an $h_{*} > 0$ such that the sequence $\{h_{(n)}\}$ defined by $h_{(n)} = g(0, h_{(n-1)})$, $n = 1, 2, \ldots$, converges to $h_{*}$ as $n \to \infty$ for all initial values $h_{(0)}$. If $g(0, h) < h$ for all $h > h_{*}$ and all admissible parameter values, it suffices that this convergence holds for all $h_{(0)} \leq h_{*}$.
(d) There exist $b \geq 0$, $\underline{h} > 0$, and a Borel measurable function $\phi: \mathbb{R} \to [0, \infty)$ such that $g(h^{1/2}\varepsilon, h) \leq \phi(\varepsilon)\, h + b$ for all $\varepsilon \in \mathbb{R}$ and all $h \geq \underline{h}$. Furthermore, $E[\phi(\varepsilon_t)] < \infty$.
Assumption 5: Let the real number $a_0$ be as in Assumption 1 and $\phi$ and $b$ as in Assumption 4(d). Assume that either
(a) $E[\phi(\varepsilon_t)^{a_0}] < 1$, or
(b) $E[\ln \phi(\varepsilon_t)] < 0$ and $E[\phi(\varepsilon_t)^{a_0}] < \infty$.
Assumption 6: For each initial value $x_0$ of the Markov chain, there exists a control sequence of error values $(e_1, \ldots, e_{s+1})$ such that the $(s+1) \times (s+1)$ matrix of partial derivatives of the resulting state $X_{s+1}$ with respect to $(e_1, \ldots, e_{s+1})$ is nonsingular.
As can easily be seen from the above, Assumption 1 concerns the error term $\varepsilon_t$, Assumptions 2 and 3 concern only the conditional mean model, Assumptions 4 and 5 restrict only the conditional variance model, and only Assumption 6 concerns both the conditional mean and the conditional variance. However, Meitz and Saikkonen (2006) argued that Assumption 6 can even be checked by examining merely the conditional variance model.
The above Assumptions are required for the proof of the Q-geometric ergodicity of the Markov chain; under appropriate initial distributions the process is β-mixing, and there exists a stationary initial distribution such that $y_t$ and $h_t$ have moments of orders $2a_0$ and $a_0$, respectively, if Assumption 5(a) holds (or moments of orders $2a_1$ and $a_1$, respectively, with an unknown $a_1 \in (0, a_0]$, if Assumption 5(b) holds instead) (Meitz and Saikkonen, 2006).
3.2.4 Verifying the conditions
Now we consider our suggested models and show that the above results apply to them as well. Since Assumption 1 holds by assumption, it suffices to discuss only Assumptions 2-6.
For the ExpAR model of order s, we have
$$f(y_{t-1}, \ldots, y_{t-s}) = \sum_{i=1}^{s}\left(\phi_i + \pi_i\, e^{-\gamma y_{t-1}^{2}}\right) y_{t-i},$$
where $\gamma > 0$, and $e^{-\gamma y_{t-1}^{2}}$ is a smooth function with range $(0, 1]$. Hence, this model satisfies Assumption 2, as it can be written in the form of Assumption 2 and the exponential function is bounded and smooth. A sufficient condition for Assumption 3 is that the roots of $z^{s} - c_1 z^{s-1} - \cdots - c_s$, with $c_i = \max(|\phi_i|, |\phi_i + \pi_i|)$, lie inside the unit circle (Chen and Tsay (1993), Meitz and Saikkonen (2006)).
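To make this check concrete, the sketch below evaluates the condition for an illustrative ExpAR(2) parameterisation, using the bounds $c_i = \max(|\phi_i|, |\phi_i + \pi_i|)$ implied by the range $(0, 1]$ of the exponential weight; the parameter values are placeholders, not estimates from the thesis.

```python
import numpy as np

# Illustrative ExpAR(2) parameters (placeholders, not estimates from the thesis)
phi = np.array([0.4, -0.2])
pi_ = np.array([0.3,  0.1])

# The functional coefficients phi_i + pi_i * w, with w = exp(-gamma*y^2) in (0, 1],
# are bounded in absolute value by c_i = max(|phi_i|, |phi_i + pi_i|)
c = np.maximum(np.abs(phi), np.abs(phi + pi_))

# Sufficient condition: roots of z^s - c_1 z^{s-1} - ... - c_s inside the unit circle,
# which for nonnegative c_i is equivalent to sum(c_i) < 1
roots = np.roots(np.concatenate(([1.0], -c)))
print("sum of bounds    =", c.sum())
print("max root modulus =", np.max(np.abs(roots)))
```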
Regarding Assumptions 4-6, it is adequate to check only the conditional variance model. Obviously the ARCH and GARCH models consist of smooth functions.
If $\varepsilon_t$ is as in Assumption 1 and $h_t$ is generated by, e.g., a GARCH(1,1) model, then Assumptions 4-6 hold provided that either
$$E\!\left[(\alpha \varepsilon_t^{2} + \beta)^{a_0}\right] < 1,$$
or
$$E\!\left[\ln(\alpha \varepsilon_t^{2} + \beta)\right] < 0 \quad\text{and}\quad E\!\left[(\alpha \varepsilon_t^{2} + \beta)^{a_0}\right] < \infty$$
(see Meitz and Saikkonen (2006) for the proof).
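These two conditions are easy to check numerically for a given error distribution and given GARCH(1,1) parameters. The sketch below approximates both expectations by Monte Carlo for standard normal errors; the values of alpha, beta and the moment order a0 are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(1_000_000)   # Monte Carlo draws for epsilon_t

alpha, beta, a0 = 0.10, 0.85, 2.0      # illustrative GARCH(1,1) parameters and moment order
z = alpha * eps ** 2 + beta

print("E[(alpha*eps^2 + beta)^a0] ~", np.mean(z ** a0))    # < 1 corresponds to Assumption 5(a)
print("E[ln(alpha*eps^2 + beta)]  ~", np.mean(np.log(z)))  # < 0 corresponds to Assumption 5(b)
```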
Similarly, we can verify the conditions for our remaining suggested models.
3.3 Estimation
One proposed estimation procedure is maximum likelihood. Assuming that the sequence $\{\varepsilon_t\}$ is independently and identically normally distributed and conditioning on the observations up to time $t = 0$, the conditional log-likelihood function is
$$\ell_t(\theta) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln h_t - \frac{u_t^{2}}{2 h_t},$$
where $\ell_t(\theta)$ is the log-likelihood at time $t$, which means that the overall conditional log-likelihood function is
$$L_T(\theta) = \sum_{t=1}^{T} \ell_t(\theta).$$
In the previous formulae, $u_t$ should be replaced by
$$u_t = y_t - f(y_{t-1}, \ldots, y_{t-s}; \psi),$$
where $\psi$ is the parameter vector of the conditional mean model and $f(\cdot; \psi)$ is our specified nonlinear function in each case.
Nevertheless, since the distribution of $\varepsilon_t$ is not known in reality, $\theta$ can be estimated by the quasi-maximum likelihood (QML) method, in which case the log-likelihood has the same formula but is not conditional on the true initial values, rendering it more appropriate in practice.
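To illustrate how the QML estimator could be computed in practice, the sketch below maximises the Gaussian quasi-log-likelihood of an ExpAR(1)-GARCH(1,1) specification with scipy. The data y, the starting values, the fixed decay parameter gamma and the initialisation of h are all placeholder choices; this is a sketch of the idea, not the estimation routine used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def negative_qll(theta, y, gamma=1.0):
    """Negative Gaussian quasi-log-likelihood of an ExpAR(1)-GARCH(1,1) model.
    theta = (phi, pi, omega, alpha, beta); gamma is held fixed for simplicity."""
    phi, pi_, omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                          # crude positivity/stationarity guard
    T = len(y)
    h = np.empty(T)
    u = np.empty(T)
    h[0], u[0] = np.var(y), 0.0                # arbitrary initial values
    ll = 0.0
    for t in range(1, T):
        mean = (phi + pi_ * np.exp(-gamma * y[t - 1] ** 2)) * y[t - 1]
        u[t] = y[t] - mean
        h[t] = omega + alpha * u[t - 1] ** 2 + beta * h[t - 1]
        ll += -0.5 * (np.log(2 * np.pi) + np.log(h[t]) + u[t] ** 2 / h[t])
    return -ll

# Placeholder data and starting values (illustrative only)
rng = np.random.default_rng(2)
y = rng.standard_normal(500)
theta0 = np.array([0.2, 0.1, 0.05, 0.05, 0.80])
res = minimize(negative_qll, theta0, args=(y,), method="Nelder-Mead")
print("QML estimates:", res.x)
```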
3.4 Asymptotic theory
In this section we consider the conditions required to develop an asymptotic theory for the estimator vector. Here we follow the analysis of Chan, McAleer and Medeiros (2011), who proved the existence, consistency and asymptotic normality of the QMLE of a general nonlinear autoregressive model with first-order GARCH errors. Their results are also based on the fact that a general nonlinear autoregressive model with conditionally heteroskedastic errors can be represented as a Markov chain.
3.4.1 Assumptions
In this subsection we present the conditions which are required for proving the existence, consistency and asymptotic normality of the QMLE, as given by Chan, McAleer and Medeiros (2011).
Assumption 7: The sequence $\{\varepsilon_t\}$ of i.i.d. random variables is drawn from a density that is continuous (with respect to Lebesgue measure on the real line), unimodal, positive everywhere, and bounded in a neighbourhood of $0$.
Assumption 8: The real-valued process $\{y_t\}$ follows the nonlinear autoregressive process with GARCH(1,1) errors (NAR-GARCH(1,1)):
$$y_t = f(y_{t-1}, \ldots, y_{t-s}; \psi) + u_t,$$
$$u_t = h_t^{1/2}\,\varepsilon_t,$$
$$h_t = \omega + \alpha u_{t-1}^{2} + \beta h_{t-1},$$
where $\omega > 0$, $\alpha \geq 0$, $\beta \geq 0$, $\psi$ is the vector of parameters of the conditional mean, and $\varepsilon_t$ is i.i.d. with zero mean and unit variance.
Assumption 9: The nonlinear function $f(\cdot; \psi)$ satisfies the following set of restrictions:
(i) $f$ is continuous in $\psi$ and measurable in $(y_{t-1}, \ldots, y_{t-s})$.
(ii) $f$ is parameterised such that the parameters are well defined.
(iii) $\varepsilon_t$ and $(y_{t-1}, \ldots, y_{t-s})$ are independent.
(iv) …
(v) …
(vi) …
(vii) …
If $\lambda = (\omega, \alpha, \beta)'$ is the vector of the conditional variance parameters, we can set $\theta = (\psi', \lambda')'$.
Assumption 10: The true parameter vector $\theta_0$ is in the interior of $\Theta \subset \mathbb{R}^{m}$, a compact and convex parameter space, where $m$ is the total number of parameters.
Under the assumptions of this subsection, the QMLE $\hat{\theta}_T$ is strongly consistent for the true parameter vector, $\hat{\theta}_T \xrightarrow{a.s.} \theta_0$, and if, in addition, there exists no set … of cardinality 2 such that … and …, then
$$\sqrt{T}\,(\hat{\theta}_T - \theta_0) \xrightarrow{d} N\!\left(0,\ A(\theta_0)^{-1} B(\theta_0) A(\theta_0)^{-1}\right),$$
where the asymptotic covariance matrix takes the usual sandwich form, with
$$A(\theta_0) = E\!\left[-\frac{\partial^{2} \ell_t(\theta_0)}{\partial \theta\, \partial \theta'}\right] \quad\text{and}\quad B(\theta_0) = E\!\left[\frac{\partial \ell_t(\theta_0)}{\partial \theta}\, \frac{\partial \ell_t(\theta_0)}{\partial \theta'}\right].$$
For the proofs of the existence of the QMLE, its consistency and its asymptotic normality, see Chan, McAleer and Medeiros (2011).
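As a hedged numerical sketch of how the sandwich covariance matrix could be approximated in practice, the code below builds the outer-product matrix B from finite-difference per-observation scores and the matrix A from a finite-difference Hessian of the average quasi-log-likelihood, reusing the hypothetical ExpAR(1)-GARCH(1,1) setup from the estimation sketch above; none of it is taken from Chan, McAleer and Medeiros (2011).

```python
import numpy as np

def loglik_obs(theta, y, gamma=1.0):
    """Per-observation Gaussian quasi-log-likelihoods of an ExpAR(1)-GARCH(1,1) model
    (same illustrative setup as in the estimation sketch above)."""
    phi, pi_, omega, alpha, beta = theta
    T = len(y)
    h = np.empty(T); u = np.empty(T); ll = np.zeros(T)
    h[0], u[0] = np.var(y), 0.0
    for t in range(1, T):
        mean = (phi + pi_ * np.exp(-gamma * y[t - 1] ** 2)) * y[t - 1]
        u[t] = y[t] - mean
        h[t] = omega + alpha * u[t - 1] ** 2 + beta * h[t - 1]
        ll[t] = -0.5 * (np.log(2 * np.pi) + np.log(h[t]) + u[t] ** 2 / h[t])
    return ll[1:]

def sandwich_cov(theta_hat, y, eps=1e-4):
    """Approximate Var(theta_hat) = A^{-1} B A^{-1} / T by finite differences."""
    m = len(theta_hat)
    T = len(y) - 1

    def scores(th):
        """T x m matrix of central-difference per-observation scores."""
        S = np.zeros((T, m))
        for j in range(m):
            ej = np.zeros(m); ej[j] = eps
            S[:, j] = (loglik_obs(th + ej, y) - loglik_obs(th - ej, y)) / (2 * eps)
        return S

    S = scores(theta_hat)
    B = S.T @ S / T                            # outer product of gradients
    H = np.zeros((m, m))                       # Hessian of the average log-likelihood
    for j in range(m):
        ej = np.zeros(m); ej[j] = eps
        H[:, j] = (scores(theta_hat + ej).mean(axis=0)
                   - scores(theta_hat - ej).mean(axis=0)) / (2 * eps)
    A = -0.5 * (H + H.T)                       # symmetrise and flip sign
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv / T

# Example usage with placeholder data and a placeholder parameter vector;
# in practice theta_hat would be the QML estimate, where A is positive definite
rng = np.random.default_rng(3)
y = rng.standard_normal(500)
theta_hat = np.array([0.2, 0.1, 0.05, 0.05, 0.80])
print(sandwich_cov(theta_hat, y))
```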
3.4.2 Verifying the conditions
Here we shall show that the above results can also be applied to our suggested models.
Since Assumptions 8 and 10 hold by assumption, it suffices to check Assumption 9. For, e.g., the ExpAR model of order $s$ with GARCH(1,1) errors,
$$y_t = \sum_{i=1}^{s}\left(\phi_i + \pi_i\, e^{-\gamma y_{t-1}^{2}}\right) y_{t-i} + u_t, \qquad u_t = h_t^{1/2}\varepsilon_t, \qquad h_t = \omega + \alpha u_{t-1}^{2} + \beta h_{t-1},$$
where $\gamma > 0$, the function $f$ is obviously 9(i) continuous in $\psi$ and measurable in $(y_{t-1}, \ldots, y_{t-s})$, 9(ii) parameterised such that the parameters are well defined, and 9(iii) independent of $\varepsilon_t$. In addition, since $e^{-\gamma y_{t-1}^{2}}$ is a smooth function with range $(0, 1]$, and as long as the model is stationary, Assumptions 9(iv)-9(vii) hold as well.
The conditions for our remaining suggested models can be verified accordingly.
3.5 Multivariate models
In this section the multivariate versions of our suggested models are presented. It should be noted that the following multivariate models are not used in this research, but are presented here as they could be interesting at least theoretically.
Let $\mathbf{y}_t$ represent the $k \times 1$ vector of the time series of our interest, with time-varying conditional covariance matrix $H_t$, i.e.
$$H_t = \mathrm{Var}(\mathbf{y}_t \mid \mathcal{F}_{t-1}) = E[\mathbf{u}_t \mathbf{u}_t' \mid \mathcal{F}_{t-1}], \qquad \mathbf{u}_t = H_t^{1/2}\,\boldsymbol{\varepsilon}_t,$$
where $\mathcal{F}_{t-1}$ is the information set at time $t-1$, $\boldsymbol{\varepsilon}_t$ is a $k \times 1$ unobservable zero-mean white noise vector process (serially uncorrelated or independent), and $H_t$ is almost surely (a.s.) positive definite for all $t$.
The conditional mean model of order $s$ takes the following form:
$$\mathbf{y}_t = \sum_{i=1}^{s}\left(\Phi_i + \Pi_i\, e^{-\gamma\, \mathbf{y}_{t-1}'\mathbf{y}_{t-1}}\right)\mathbf{y}_{t-i} + \mathbf{u}_t,$$
where $\Phi_i$ and $\Pi_i$, $i = 1, \ldots, s$, are $k \times k$ coefficient matrices.
For example, a bivariate Vector ExpAR(2) model can be written equation by equation as
…
while the simplest case of a bivariate Vector Extended ExpAR(2) model takes, equation by equation, the form
…
The remaining suggested models can be written accordingly.
Regarding the conditional variance model, if we assume time-varying conditional variances and covariances but constant conditional correlations, as in Bollerslev (1990), who modelled several European U.S. dollar exchange rates, we can write every conditional variance in the case of GARCH(1,1) errors as
$$h_{iit} = \omega_i + \alpha_i u_{i,t-1}^{2} + \beta_i h_{ii,t-1}, \qquad i = 1, \ldots, k,$$
while in the case of ARCH(1) errors we have
$$h_{iit} = \omega_i + \alpha_i u_{i,t-1}^{2}, \qquad i = 1, \ldots, k.$$
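A minimal sketch of how the resulting conditional covariance matrix could be assembled under constant conditional correlations is given below (H_t = D_t R D_t, with D_t the diagonal matrix of conditional standard deviations); all numbers are made up for illustration.

```python
import numpy as np

# Illustrative CCC-GARCH(1,1) ingredients for k = 2 series (made-up values)
omega = np.array([0.05, 0.02])
alpha = np.array([0.10, 0.08])
beta  = np.array([0.85, 0.90])
R = np.array([[1.0, 0.3],
              [0.3, 1.0]])          # constant conditional correlation matrix

u_prev = np.array([0.5, -0.3])      # last period's residuals
h_prev = np.array([0.9, 0.4])       # last period's conditional variances

# Univariate GARCH(1,1) update of each conditional variance
h_t = omega + alpha * u_prev ** 2 + beta * h_prev

# Conditional covariance matrix: H_t = D_t R D_t with D_t = diag(sqrt(h_t))
D_t = np.diag(np.sqrt(h_t))
H_t = D_t @ R @ D_t
print(H_t)
```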
We can also write every conditional variance as follows:
…
where … is a positive time-invariant constant and …