# Hydrological Model Performance With Different Calibration Approaches


Because it represents a complex, non-linear system, hydrological modelling requires the adjustment of several parameters. Once a model has been chosen, it is generally not possible to estimate those parameters by either measurement or observation.

For that reason, the model parameter values must be adjusted until the model output matches the observed data tolerably well. This is done indirectly through an inverse process (calibration) using historically observed input-output data.

When a semi-distributed hydrological model is implemented over an area, the large number of unknowns results in a high-dimensional parameter search space, which considerably complicates the optimization. When this dimension becomes so large that the unknowns can no longer be uniquely constrained by the data, the problem is said to be poorly conditioned.

This study therefore evaluates several strategies for reducing the dimensionality of the calibrated parameters. Two of them focus on pre-calibrating the slow components, such as the groundwater parameters. In addition, further strategies were applied: lumped parameters, one factor per parameter group, and clusterization of the groundwater parameters for sub-catchments with the same characterization.

The calibration runs used PEST, an independent non-linear parameter estimator. The HEC-HMS model generated the simulated flow hydrographs that were compared with the observed data.

The entire procedure was applied to the Leineturm catchment, a southern sub-catchment of the Aller Leine catchment in northern Germany.

The results suggest that the model is not strongly sensitive to spatial parameter variability, so a mean value may represent the catchment well, especially at the outlet. Nevertheless, the clusterization of the groundwater parameters performed slightly better.

## Contents

- List of Figures
- List of Tables
- 1. Introduction
- 2. Methodology
  - 2.1 Data
  - 2.2 Model
  - 2.3 Calibration
  - 2.4 Objective function and evaluation
- 3. Study area and database
- 4. Results
- 5. Discussion
- References
- Appendix
- Declaration

## List of Figures

- Figure 1.1: Hydrological cycle with global annual average water balance given in units relative to a value of 100 for the rate of precipitation on land
- Figure 2.1: Sketch of the application of the inverse distance method
- Figure 2.2: Disaggregating precipitation. (Left) Example in hourly time steps. (Right) Transformation of the daily time steps into hourly
- Figure 2.3: Outline of the soil moisture accounting
- Figure 2.4: Simulation (HEC-HMS) and calibration (PEST) process
- Figure 2.5: Sketch of a catchment with lumped parameter
- Figure 2.6: Sketch of a catchment with one factor parameter
- Figure 2.7: Sketch of a catchment with cluster parameter
- Figure 3.1: Digital Elevation Model (DEM) - Location of Leineturm as a sub-catchment of Aller Leine catchment
- Figure 3.2: (Left) Precipitation gauges of Leineturm catchment. (Right) Net radiation and temperature gauges of Leineturm catchment
- Figure 3.3: Discharge gauges of Leineturm catchment
- Figure 4.1: Observed and simulated flow for the year 1998 of the Leineturm gauge, using the two steps daily/hourly strategy
- Figure 4.2: Precipitation event (left), potential evapotranspiration (middle), actual evapotranspiration (right)
- Figure 4.3: Observed and simulated flow for the year 2007 using the two steps moving average strategy
- Figure 4.4: Observed and simulated flow for the year 2007 of the Leineturm gauge, using the lumped strategy
- Figure 4.5: Observed and simulated flow for the year 2007 of Mariengarten gauge, using the lumped strategy
- Figure 4.6: Nash-Sutcliffe × Failure volume of Leineturm gauge (left) and Mariengarten gauge (right)
- Figure 4.7: Fifteen highest independent peaks for the lumped strategy for Leineturm gauge
- Figure 4.8: Fifteen highest independent peaks for the one factor strategy for Leineturm gauge
- Figure 4.9: Fifteen highest independent peaks for the clusterization strategy for Leineturm gauge
- Figure 4.10: Square correlation coefficient of Leineturm gauge for lumped (left), one factor (middle) and clusterization (right) strategies

## List of Tables

- Table 2.1: Applied methods and their categories
- Table 2.2: Applied methods and their parameter groups
- Table 2.3: Parameter strategies applied and their time step
- Table 3.1: Leineturm characteristics
- Table 4.1: Calibration and validation of the different strategies
- Table 4.2: Failure volume and Nash-Sutcliffe index of every gauge and every strategy
- Table 4.3: Square correlation coefficient of Leineturm, Reckershausen and Mariengarten gauges for lumped, one factor and clusterization strategies

## 1. Introduction

Water is the most abundant compound on the globe, the main element of every living organism and responsible for shaping the surface of the earth. It is central to the climate, provides the conditions for human subsistence and has determined the development of civilization (Chow et al., 1988).

Additionally, water is among the few substances that can be found in all three phases within the earth's climate range. In this context, according to Meyer (1917), hydrology is the science that treats of the phenomena of water in all its states, taking into account the distribution and occurrence of water in the earth's atmosphere, surface, soil and rock strata.

The hydrological cycle can be considered the central point of hydrological studies. It is a conceptual description of how water moves between the earth and the atmosphere in its three different phases (Davie, 2002). As shown in Figure 1.1, the cycle has no beginning and no end.

Figure 1.1: Hydrological cycle with global annual average water balance given in units relative to a value of 100 for the rate of precipitation on land (Chow et al., 1988)

Through evaporation, liquid water becomes vapour and part of the atmosphere; the vapour is moved around until it condenses into a liquid (or solid) and falls onto the land or the ocean; precipitated water may be intercepted by vegetation, become overland flow, infiltrate into the soil, turn into subsurface flow, and discharge into streams as surface runoff (Chow et al., 1988).

Due to the number of variables and the complexity involved in the hydrological cycle, the development of working equations and hydrological models becomes necessary for a better understanding of the system.

Furthermore, hydrological modelling is essential for practical problems in water assessment, such as flood forecasting, the design of engineered channels, assessing the impacts of effluents on water quality, predicting pollution incidents, and many other purposes (Beven, 2001).

The process of hydrological modelling can be defined as an approximation of the real system using a set of equations to link the inputs and outputs (Chow et al., 1988). In other words, a hydrological model, according to Penman (1961, cited in Singh et al., 2002), answers the question "What happens to the rain?".

In recent years, with the increasing demands on water resources, hydrological modelling has become an important tool. In addition, with advances in the processing power of computers and the consequent development of improved models, a more realistic description of the catchment and its components has become possible.

However, hydrological modelling is still uncertain because it must represent a complex, non-linear system. Once the model has been chosen, it is generally not possible to estimate its parameters by either measurement or observation (Beven, 2001; Cunderlik & Simonovic, 2004).

For that reason, the model parameter values require adjustment until the model output matches the observed data tolerably well. This is done indirectly through an inverse process (calibration) using historically observed input-output data. Some degree of calibration is normally inevitable in hydrological modelling (Pokhrel & Gupta, 2010).

When a semi-distributed hydrological model is implemented over an area, the large number of unknowns results in a high-dimensional parameter search space, which considerably complicates the optimization. When this dimension becomes so large that the unknowns can no longer be uniquely constrained by the data, the problem is said to be poorly conditioned (Pokhrel & Gupta, 2010).

For the aim of this study, several strategies were applied to improve the fit of the objective function by using constraints that reduce the dimensionality of the parameter space.

Two of those strategies focus on pre-calibrating the slow components, such as the groundwater parameters. In parallel, other strategies, namely lumped parameters, the implementation of one factor for each parameter group and clusterization of the groundwater parameters, were applied, grouping the parameters in order to reduce their dimension.

## 2. Methodology

## 2.1 Data

For this study, interpolation methods were used to obtain the areal variability of climate variables. Such methods are commonly applied to estimate variables that vary randomly in space with some degree of spatial correlation. In the case of rainfall estimation, for example, observations are usually available from only a few rain gauges, and the distribution of rainfall over the whole catchment area must be inferred (Sorooshian et al., 2008).

In this study, the interpolation methods applied were: inverse distance method (IDM), ordinary kriging (OK) and external drift kriging (EDK).

The IDM is based on the assumption that the value at any given point of the catchment area is influenced by the nearest stations, each weighted by the inverse of a power of its distance to the point. To obtain the areal average, the method is applied by subdividing the area into m rectangular subareas (raster cells), each with an assumed uniform value as calculated for the point at its centre, as illustrated in Figure 2.1 (Brutsaert, 2005).


Figure 2.1: Sketch of the application of the inverse distance method (Adapted from Brutsaert, 2005)

Therefore, using the IDM, the areal value is determined by Equation 2.1 (Brutsaert, 2005).

$$P = \frac{1}{A}\sum_{j=1}^{m} A_j \,\frac{\sum_{i=1}^{n} P_i \, d_{ij}^{-b}}{\sum_{i=1}^{n} d_{ij}^{-b}} \qquad (2.1)$$

Where:

P: areal precipitation;

Pi: precipitation measured at the ith gauge;

Aj: surface area of the jth raster cell;

A: total surface area of the catchment;

m: total number of raster cells;

n: total number of stations;

dij: distance of the centre of the jth raster cell from the ith gauge;

b: constant, mostly taken as 2.
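As an illustration of Equation 2.1, the following sketch computes the areal precipitation from gauge values and raster-cell distances; the function name and the input layout are hypothetical choices for this example, not part of the original method description.

```python
def idw_areal_precip(raster, gauges, b=2):
    """Areal precipitation by the inverse distance method (Equation 2.1).

    raster: list of (cell_area, [distance_to_gauge_1, ..., distance_to_gauge_n])
    gauges: list of gauge precipitation values P_i
    b:      distance exponent, mostly taken as 2
    """
    total_area = sum(area for area, _ in raster)
    p_areal = 0.0
    for area, dists in raster:
        # inverse-distance weights for this raster cell
        weights = [d ** -b for d in dists]
        cell_value = sum(w * p for w, p in zip(weights, gauges)) / sum(weights)
        # area-weighted contribution of the cell to the catchment average
        p_areal += area * cell_value
    return p_areal / total_area
```

With two symmetric cells and gauges reporting 4 mm and 6 mm, the catchment average comes out as 5 mm, as expected from the symmetry.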

Interpolation by kriging is commonly used in environmental data. In kriging, the weights are determined on the basis of the spatial variability of the data. Moreover, it is based on the dual criteria that the estimation error and the corresponding mean square error are minimal (Brutsaert, 2005).

The basis of kriging follows the same approach as the IDM: given a number of measured values zi at specific locations within area A, the value zr at a new location can be estimated as a weighted average of the zi, where λir represents the weight attached to the ith observation; see Equation 2.2 (Sorooshian et al., 2008).

$$z_r = \sum_{i=1}^{n} \lambda_{ir}\, z_i \qquad (2.2)$$

Therefore, the aim is to determine values for λir which give the optimal value of zr for any location, taking into account the observed spatial correlation structure of the data (Sorooshian et al., 2008).

OK, a non-stationary method, allows the mean of the variable of interest to vary from place to place across the area, although within each local neighbourhood the mean is assumed to be constant (Lloyd, 2007).

Finally, the EDK is used when a secondary variable is linearly related to the primary variable. In other words, the secondary data acts as a shape function, describing trends in the primary data (Lloyd, 2007).

In order to obtain a better resolution of the precipitation data, a disaggregation method was applied. This was done by transferring the frequency distribution of the hourly time steps onto the daily time steps, as illustrated in Figure 2.2.


Figure 2.2: Disaggregating precipitation. (Left) Figure shows an example in hourly time steps. (Right) Figure shows the transformation of the daily time steps in hourly
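The disaggregation step can be sketched as follows; `disaggregate_daily` and its arguments are hypothetical names, and the sketch assumes the daily total is split according to the relative frequencies of a reference hourly pattern, falling back to a uniform split when no sub-daily information is available.

```python
def disaggregate_daily(daily_total, hourly_pattern):
    """Split a daily precipitation total into 24 hourly values using the
    relative frequency distribution of a reference hourly series."""
    pattern_sum = sum(hourly_pattern)
    if pattern_sum == 0:
        # no sub-daily information: distribute the total uniformly
        return [daily_total / 24.0] * 24
    # scale the pattern so the hourly values sum to the daily total
    return [daily_total * h / pattern_sum for h in hourly_pattern]
```

By construction, the 24 hourly values always sum back to the daily total, which preserves the water balance of the original record.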

Due to the unavailability of measured data, the net radiation, defined as the radiation input at the surface at any instant and the major energy input for the evaporation of water (Chow et al., 1988), was calculated using Equation 2.3 (DVWK-Merkblatt, 1996):

$$R_n = (1-\alpha)\,R_G - \sigma\,T_{abs}^4\left(0.1 + 0.9\,\frac{S}{S_0}\right)\left(0.34 - 0.044\,\sqrt{e}\right) \qquad (2.3)$$

Where:

Rn: net radiation (J/cm2);

α: albedo;

RG: global radiation;

σ: Stefan-Boltzmann constant;

Tabs: absolute air temperature;

S: daily sunshine duration;

S0: day length according to the latitude;

e: saturation vapour pressure as a function of the air temperature.

The values of the crop coefficient (Kc) were based on land use characteristics. Kc changes according to the growth stage of the crop; normally its values vary over a range of 0.2 ≤ Kc ≤ 1.3 (Chow et al., 1988).

## 2.2 Model

For the purpose of this study, the hydrologic modelling system used was HEC-HMS (version 3.3), developed by the U.S. Army Corps of Engineers. It belongs to the category of mathematical models and is treated here as a conceptual model, in which a set of equations represents the response of a hydrologic system to a change in hydrometeorological conditions.

The conceptual model incorporated into HEC-HMS is physically based and describes how a catchment responds to precipitation falling directly on it or to upstream water flowing into it (US Army Corps of Engineers, 2000).

HEC-HMS includes a wide range of methods for simulating the catchment. These methods can be classified as physical and meteorological. Table 2.1 shows the methods applied in this study.

Table 2.1: Applied methods and their categories

| Category | Description | Method |
| --- | --- | --- |
| Physical | Runoff generation | Soil moisture accounting (SMA) |
| Physical | Direct-runoff | Clark's unit hydrograph |
| Physical | Baseflow | Linear reservoir |
| Physical | Routing | Muskingum |
| Meteorology | Evapotranspiration | Priestley-Taylor |
| Meteorology | Snowmelt | Temperature index |

The SMA is a continuous model, which simulates both wet and dry weather conditions. It works by simulating the movement and storage of water through vegetation, the soil surface, the soil profile and groundwater layers. Given precipitation and evapotranspiration, the SMA model computes the surface runoff, groundwater flow, losses due to evapotranspiration and deep percolation over every sub-catchment area; see Figure 2.3 (US Army Corps of Engineers, 2000).

Figure 2.3: Outline of the soil moisture accounting (US Army Corps of Engineers, 2000)

As illustrated in the Figure 2.3, the SMA model represents the catchment with a series of storage layers. Canopy-interception storage layer represents the part of the precipitation which is captured by the vegetation and does not achieve the soil surface. Surface-depression storage is the volume of water held in shallow surface depression. Soil-profile storage represents water stored in the top layer of the soil. Finally, the groundwater (GW) storage layer, which represents the horizontal interflow (GW1) and the base flow (GW2) process (US Army Corps of Engineers, 2000).

In order to calculate the direct runoff with a unit hydrograph, which describes a simple linear model that can be used to derive the hydrograph resulting from any amount of excess rainfall, HEC-HMS uses a discrete representation in which a pulse of excess precipitation is known for each time interval (Chow et al., 1988; US Army Corps of Engineers, 2000).

The Clark unit hydrograph considers that two processes dominate the movement of flow through a catchment. The first is translation, defined by the downgradient movement of flow through the catchment due to gravity. The other is named attenuation, which is the frictional forces and channel storage effects that resist the flow (Straub et al., 2000).

Furthermore, together with the Clark unit hydrograph, the linear reservoir model represents the aggregated impacts of all catchment storage. The linear reservoir model transforms the rainfall excess to direct surface runoff. This model is based on the concept that a catchment behaves as a reservoir in which storage is linearly related to outflow (US Army Corps of Engineers, 1980; US Army Corps of Engineers, 2000).

The process used to determine the variation of flow rate of a flood wave as it moves through a river reach in time and space is called hydrologic routing (Das & Saikia, 2009). For that purpose, the Muskingum flood routing method was used, which is based on the concepts of prism and wedge storage in a river reach, under the assumption that those storages can be treated as a linear relationship between inflow and outflow. The prism storage is the volume defined by a steady-flow water surface profile, while the wedge storage is the extra volume under the profile of the flood wave. During rising flood events, the wedge storage is positive and is added to the prism storage; conversely, during the falling limb of a flood, the wedge storage is negative and is subtracted from the prism storage (US Army Corps of Engineers, 2000).

For the meteorological description of the catchment, methods for evapotranspiration and snowmelt were used.

With the purpose of estimating the potential evapotranspiration EP, the Priestley-Taylor method was applied, using Equation 2.4 (Gardelin & Lindström, 1996):

$$E_P = \alpha\,\frac{s}{s+\gamma}\,(R_n - S) \qquad (2.4)$$

Where:

α: Priestley-Taylor coefficient or dryness coefficient;

s: gradient of the saturated vapour pressure, which is a function of the air temperature;

γ: psychrometric constant;

Rn: net radiation, which comes from the global radiation;

S: soil heat flux.

After the estimation of the potential evapotranspiration, the actual evapotranspiration Et is calculated based on canopy and soil water balances (Chow et al., 1988).
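A minimal sketch of the Priestley-Taylor relation of Equation 2.4 is given below; the function name is hypothetical, and consistent units for the radiation and flux terms are assumed.

```python
def priestley_taylor_pet(alpha, s, gamma, rn, soil_heat_flux):
    """Potential evapotranspiration after Priestley-Taylor (Equation 2.4):
    EP = alpha * s / (s + gamma) * (Rn - S)."""
    return alpha * s / (s + gamma) * (rn - soil_heat_flux)
```

For example, with the commonly cited dryness coefficient of 1.26 and s equal to the psychrometric constant, the energy term (Rn − S) is simply halved before scaling.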

Finally, the Temperature Index method was used to determine whether precipitation fell in liquid or frozen form and, consequently, to calculate snowmelt. The accumulation and melt of the snowpack are simulated in response to atmospheric conditions (US Army Corps of Engineers, 2008). The basic Equation 2.5 of the Temperature Index method is (US Army Corps of Engineers, 1998):

$$M_s = C_m\,(T_a - T_b) \qquad (2.5)$$

Where:

Ms: snowmelt [L/T];

Cm: melt rate coefficient [L/(T·θ)];

Ta: air temperature [θ];

Tb: base temperature [θ].
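The degree-day relation of Equation 2.5 can be sketched as follows; the function name is hypothetical, and the clamping of melt to zero below the base temperature is an assumption consistent with the method's intent.

```python
def degree_day_snowmelt(cm, ta, tb):
    """Temperature-index snowmelt (Equation 2.5): Ms = Cm * (Ta - Tb).
    No melt occurs when the air temperature is at or below the base
    temperature."""
    return max(0.0, cm * (ta - tb))
```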

## 2.3 Calibration

The calibration process seeks the optimal parameter values, i.e. those that minimize the objective function, the measure of goodness of fit (Cunderlik & Simonovic, 2004).

For the purpose of this study, the adjustment of the parameters (Table 2.2), or simply calibration, was carried out automatically using PEST, an independent non-linear parameter estimator. PEST uses an optimization algorithm until some "best fit" parameter set has been found (Beven, 2001).

Table 2.2: Applied methods and their parameter groups

| Method | Parameter group |
| --- | --- |
| SMA | Maximum Infiltration, Soil Storage, Tension Storage, Soil Percolation, Groundwater1 Storage, Groundwater1 Percolation, Groundwater1 Coefficient, Groundwater2 Storage, Groundwater2 Coefficient |
| Clark's Unit Hydrograph | Time of Concentration, Storage Coefficient |
| Linear Reservoir | Groundwater1 Coefficient, Groundwater2 Coefficient |
| Muskingum | Travel Time, Weighting Factor |

PEST adjusts the model parameters until the discrepancies between the results generated by the model and the corresponding measurements are reduced to a minimum. This is done by taking control of the model and running it as many times as necessary to determine the optimal set of parameters. The non-linear estimation technique used for this procedure is known as the Gauss-Marquardt-Levenberg method (Doherty, 2004).

Figure 2.4 illustrates the general process of the simulation model and the PEST calibration, as well as the interaction between the two.


Figure 2.4: Simulation (HEC-HMS) and calibration (PEST) process

As HEC-HMS is a physically based model, the initial parameters are estimated as a function of soil type, land use and the digital elevation model (DEM). PEST is then provided with this set of parameters and is able to rewrite the model input data at any stage of the optimization process. Each time PEST runs the model, it reads the model output and compares it with the measured data. Having calculated the mismatch between the two data sets and evaluated the best way to correct it, PEST adjusts the model input data and runs the model again. By comparing the parameter changes and the objective-function improvement achieved in the current iteration with those of previous iterations, PEST decides whether a further optimization iteration is worthwhile; if so, the whole process is repeated (Doherty, 2004).

A model with a significant number of sub-catchments, each containing numerous parameter groups, has a large number of unknowns, resulting in a high-dimensional parameter search space that makes the optimization problem complex.

When using a semi-distributed model like HEC-HMS, the number of sub-catchments Ns, each with a certain number of parameter groups Np, gives a dimension ψ, as shown in Equation 2.6:

ψ = Ns·Np

(2.6)

Thus, in order to handle this dimensionality issue, the different strategies shown in Table 2.3 were applied, and each of them was subsequently evaluated.

Table 2.3: Parameter strategies applied and their time step

| Strategy | Time step |
| --- | --- |
| Two steps - daily/hourly | Daily and hourly |
| Two steps - MAVG | Hourly |
| Lumped parameter | Hourly |
| One factor parameter | |
| Clusterization | |

## a) Two steps - daily/hourly

In the two time steps strategy, the slow components (groundwater 2) are calibrated first. As the slow components produce base flow, which has a smoother pattern, the calibrated groundwater 2 parameters were then held fixed while all fast components were calibrated in a second calibration step.

The calibration using the two time steps strategy was first carried out in daily time steps, owing to the longer availability of daily data. It was then applied to the faster components in hourly time steps. Because the calibration is split into two steps, the dimension of each calibration step becomes smaller.

## b) Two steps - moving average

The moving average (MAVG) was applied with the intention of reducing the dimension by using the average of the parameters. The two steps moving average strategy consists of first obtaining the mean of the parameters using the MAVG and then starting the calibration process.

The MAVG is based on the principle that the components of a time series show autocorrelation while the random fluctuations are not autocorrelated. Thus, averaging neighbouring measurements eliminates the random fluctuations, with the remaining variation converging to a description of the system (McCuen, 1941). In hydrological terms, all fast components are eliminated, so that the parameters which retain influence are those of the slower components. Equation 2.7 shows how the MAVG works.

$$\hat{Y}_i = \sum_{j=1}^{m} w_j\,Y_{i-\frac{m+1}{2}+j} \qquad (2.7)$$

Where:

m: number of observations in the smoothing interval;

wj: weight applied to value j of the series.

The smoothing interval m is normally an odd integer, with 0.5(m−1) values of Y before observation i and 0.5(m−1) values of Y after observation i used to estimate the smoothed value. In addition, the simplest weighting scheme is the arithmetic mean (McCuen, 1941).
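The smoothing described above can be sketched as a centered moving average with arithmetic-mean weights; the function name is hypothetical, and the sketch drops the 0.5(m−1) values at each end of the series where no full window exists.

```python
def moving_average(series, m):
    """Centered moving average with arithmetic-mean weights (Equation 2.7).
    m must be odd; 0.5*(m-1) values on each side of observation i are used,
    so the smoothed series is shorter at both ends."""
    if m % 2 == 0:
        raise ValueError("smoothing interval m must be odd")
    half = (m - 1) // 2
    return [sum(series[i - half:i + half + 1]) / m
            for i in range(half, len(series) - half)]
```

Applied to a linear series, the smoother reproduces the interior values exactly, since random fluctuations are absent.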

## c) Lumped Parameter

The lumped parameter strategy is based on the category of models known as lumped models. These models assume that the entire catchment area has the same properties, i.e. that there is no spatial parameter variation (Hangos & Cameron, 2001).

In this way, the whole study area is treated as a single catchment (without sub-catchments). For this reason, a new, reduced dimension applies; see Equation 2.8:

ψ = Ns·Np → ψ = Np

(2.8)

For the calibration process, each parameter group has the same value over the total area of the catchment. To illustrate the strategy, let Φi,j denote the parameter value of a specific parameter group (i stands for the parameter group) in a specific sub-catchment (j stands for the sub-catchment); Figure 2.5 shows a sketch of the lumped strategy.

Φ1,1 = Φ1,2 = Φ1,3 = Φ1,4

Figure 2.5: Sketch of a catchment with lumped parameter

To achieve uniformity across the catchment, and hence a large reduction in the number of parameters, the physically based initial parameter values were averaged for each parameter group. Thus, only one value per parameter group remains for the calibration process.
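The averaging step can be sketched as follows; the function name, the dictionary layout and the group names in the usage example are hypothetical choices for illustration.

```python
def lump_parameters(params):
    """Lumped strategy: replace the per-sub-catchment values of each parameter
    group by their arithmetic mean, leaving one value per group to calibrate.

    params: dict mapping parameter group -> list of sub-catchment values
    """
    return {group: sum(values) / len(values)
            for group, values in params.items()}
```

For instance, soil storage values of 100 and 200 over two sub-catchments collapse to a single calibratable value of 150.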

## d) One Factor

According to Davison (2003), owing to the spatial averaging of the parameters, the hypothesis of a lumped catchment limits the ability to describe the catchment. A spatially distributed parameter assumption was therefore adopted, using one common factor for each parameter group.

This strategy multiplies the parameter values Φi,j of every parameter group by one common factor βi per group (Pokhrel & Gupta, 2010), generating new parameter values Φ'i,j; see Equation 2.9.

$$\Phi'_{i,j} = \beta_i\,\Phi_{i,j} \qquad (2.9)$$

The factor approach assumes that the initial parameter set already describes the spatial pattern; however, the magnitude of every parameter must still be adjusted to achieve a better simulation of the model (Pokhrel & Gupta, 2010). In this case, a certain range was adopted for the factor, depending on the parameter.

Therefore, the adoption of one factor for each parameter group yields exactly the same dimension as the lumped strategy (see Equation 2.8). Nevertheless, as mentioned before, the spatial variability is kept. Figure 2.6 illustrates how the one factor strategy works.
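The factor multiplication can be sketched as below; the function name and the data layout are hypothetical, and the calibrated factor per group is assumed to be supplied externally (by the optimizer).

```python
def apply_factor(params, factors):
    """One-factor strategy: multiply every sub-catchment value of a parameter
    group by one common calibrated factor, preserving the spatial pattern of
    the initial estimates (cf. Equation 2.9)."""
    return {group: [factors[group] * v for v in values]
            for group, values in params.items()}
```

Note that the relative differences between sub-catchments survive the scaling, which is exactly what distinguishes this strategy from the lumped one.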


Figure 2.6: Sketch of a catchment with one factor parameter

## e) Clusterization

The last strategy for reducing the parameter dimension while retaining spatial variability in the catchment was the clusterization of the groundwater1 parameter group Φgw1,j and the groundwater2 parameter group Φgw2,j.

For groundwater1, the clusters Ci were generated according to the physical characteristics of the sub-catchments. For groundwater2, a similar procedure was followed, but the clusters were generated by position in space instead of physical characteristics.

Below, Figure 2.7 demonstrates an example of the groundwater1 clusterization.

C1: Φgw1,1 and Φgw1,2

C2: Φgw1,3 and Φgw1,4

Figure 2.7: Sketch of a catchment with cluster parameter

Therefore, according to Figure 2.7, the dimension ψ is considerably reduced. The new dimension is given by Equation 2.10:

$$\psi = N_{tot} - N_{gw} + N_{c} \qquad (2.10)$$

Where:

Ntot: total number of parameters;

Ngw: number of groundwater parameters which are in clusters;

Nc: number of cluster groundwater parameters.

Compared with the lumped strategy, the parameter dimension under clusterization is larger, because one factor is used for each cluster. Compared with the initial conditions (see Equation 2.6), however, it is much smaller.

Furthermore, the parameter values of each cluster Ci were calculated by multiplying the initial parameter values by one factor, as in the previous strategy.
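The cluster-wise scaling can be sketched as follows; the function name and the mapping layout are hypothetical, and the sketch assumes one calibrated factor per cluster, shared by all sub-catchments assigned to it.

```python
def apply_cluster_factors(values, clusters, factors):
    """Cluster strategy: sub-catchments in the same cluster share one
    calibrated factor for their groundwater parameter.

    values:   dict sub-catchment -> initial groundwater parameter value
    clusters: dict sub-catchment -> cluster id
    factors:  dict cluster id -> calibrated factor
    """
    return {sc: factors[clusters[sc]] * v for sc, v in values.items()}
```

With two clusters over four sub-catchments, only two factors need to be calibrated instead of four individual parameters, while sub-catchments in different clusters can still diverge.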

After the calibration procedure, the validation process took place. The purpose of validation, which can be regarded as an extension of the calibration, is to make sure that the calibrated model appropriately represents all the parameters and conditions that can affect the model (Donigian, 2002). Furthermore, it shows that the model parameters are robust outside the calibration time series.

## 2.4 Objective function and evaluation

Once interfaced with the model, PEST aims to minimize the weighted sum of the squared differences between the data generated by the model and the measured data. This measure of the discrepancy between model and measurements is referred to as the "objective function"; see Equation 2.11 (Doherty, 2004).

$$\Phi = \sum_{i} (w_i\, r_i)^2 \qquad (2.11)$$

Where:

Φ: objective function;

ri: ith residual;

wi: weight pertaining to observation i.

The choice of a suitable measure is vital for a robust calibration process. The Nash-Sutcliffe efficiency index NSC is a widely used and potentially consistent statistic for assessing the goodness of fit of hydrological models. This index relates the mean square error to the observed variance; see Equation 2.12 (McCuen et al., 2006).

$$NSC = 1 - \frac{\sum_{i=1}^{n}(O_i - M_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2} \qquad (2.12)$$

Where:

Mi: model output;

Oi: observed values;

Ō: mean observed value.

If the error is zero (NSC = 1), the model represents a perfect fit. If the error has the same magnitude as the observed variance (NSC = 0), the observed mean value is as good a representation as the model, in which case the model represents reality poorly (Wainwright & Mulligan, 2004).
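The index of Equation 2.12 can be computed with the short sketch below; the function name is hypothetical.

```python
def nash_sutcliffe(observed, modelled):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the squared model
    error to the variance of the observations (cf. Equation 2.12)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modelled))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var
```

A perfect fit returns 1, while a model that predicts nothing but the observed mean returns 0, matching the interpretation above.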

In order to evaluate the consistency of the fitted model, the failure volume FV was analyzed. It is simply the relative deviation of the simulated volume from the observed volume, as shown in Equation 2.13.

$$FV = \frac{\sum_{i} M_i - \sum_{i} O_i}{\sum_{i} O_i} \qquad (2.13)$$

Where:

Mi: model output;

Oi: observed values.

A positive FV indicates that the model consistently overestimates the measured values, while a negative FV means a consistent underestimation.
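A sketch of the failure volume follows, assuming FV is the relative difference between the simulated and observed volumes (positive when the model overestimates, as stated in the text); the function name is hypothetical.

```python
def failure_volume(observed, modelled):
    """Failure volume: relative deviation of the simulated volume from the
    observed volume; positive values indicate overestimation."""
    vol_obs = sum(observed)
    return (sum(modelled) - vol_obs) / vol_obs
```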

Finally, with the intention of comparing the different adopted strategies, the focus was placed on the peaks of simulated and observed discharge using partial series. The peaks were selected with respect to a limit discharge Qs, which is a function of the number of years of the series; see Equation 2.14 (Maniak, 2005):

(2.14)

Where:

Qs: limit discharge;

N: number of years of the series.

In this way, all independent peaks above Qs were analyzed in order to compare the observed and simulated data.

The fit between the simulated and observed peaks was compared using the square of the correlation coefficient R2, which measures the degree of linear association between two random variables (McCuen, 1941).
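The peak-selection step can be sketched as follows; the function name is hypothetical, and the minimum spacing `gap` between peaks is an assumed independence criterion, since the text does not specify how independence between peaks was defined.

```python
def select_peaks(flows, threshold, gap=3):
    """Select independent peaks above a limit discharge Qs for a partial
    series. Peaks are local maxima above the threshold; 'gap' time steps
    must separate two peaks for them to count as independent (an assumed
    criterion)."""
    peaks = []
    last_index = None
    for i in range(1, len(flows) - 1):
        is_peak = flows[i] > threshold and flows[i] >= flows[i - 1] and flows[i] > flows[i + 1]
        if is_peak and (last_index is None or i - last_index >= gap):
            peaks.append((i, flows[i]))
            last_index = i
    return peaks
```

The selected observed and simulated peak pairs can then be compared via R2 as described above.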

## 3. Study area and database

For the purpose of this project, the study area investigated was the Leineturm catchment, a southern sub-catchment of the Aller Leine catchment, located in northern Germany. Figure 3.1 shows the location of the Leineturm catchment in a digital elevation model (DEM) with a resolution of 10 m × 10 m.


Figure 3.1: Digital Elevation Model (DEM) - Location of Leineturm as a sub-catchment of Aller Leine catchment

The general characteristics of the Leineturm catchment are presented in Table 3.1.

Table 3.1: Leineturm characteristics

| Characteristic | Value |
| --- | --- |
| Total area | 990 km² |
| Latitude | 51.5° |
| Longitude | 10.0° |
| Mean temperature | 8.5 °C |
| Mean precipitation | 667 mm/a |

Data such as air temperature, precipitation and humidity, used for the development of this study, were acquired from the Niedersächsischer Landesbetrieb für Wasserwirtschaft, Küsten- und Naturschutz - NLWKN (Lower Saxony Water Management, Coastal Defence and Nature Conservation Agency). The locations of the precipitation, net radiation and temperature gauges are illustrated in Figure 3.2. However, not all precipitation gauges were used in this study, due to short time series.

Figure 3.2: (Left) Precipitation gauges of Leineturm catchment. (Right) Net radiation and temperature gauges of Leineturm catchment

In order to evaluate the performance of the entire process described in the methodology, Figure 3.3 shows the location of the discharge gauges (also from the NLWKN) used for that purpose.

Figure 3.3: Discharge gauges of Leineturm catchment

Information about soil type was obtained from BÜK 1000, a soil map of Germany at the scale 1:1,000,000 produced by the Bundesanstalt für Geowissenschaften und Rohstoffe - BGR (Federal Institute for Geosciences and Natural Resources).

The land use data were acquired from the project CORINE Land Cover (CLC) 2006, a land cover data set for Europe. In Germany the CLC was performed by the Deutsches Fernerkundungsdatenzentrum (German Remote Sensing Data Center) on behalf of the Umweltbundesamt (Federal Environment Agency).

## 4. Results

In order to give an overview of the different strategies, Table 4.1 shows the periods which were used for the calibration and validation processes.

Table 4.1: Calibration and validation of the different strategies

| Strategy | Calibration period | Validation period |
| --- | --- | --- |
| Two time steps | no results | no results |
| Two steps MAVG | no results | no results |
| Lumped | 2004, 2007, 2008 | 2005, 2006 |
| One factor | 2004, 2007, 2008 | 2005, 2006 |
| Clusterization | 2004, 2007, 2008 | 2005, 2006 |

Subsequently, the results are presented in the same order as in the methodology.

## a) Two steps - daily/hourly

The comparison between the observed and simulated daily flow for the period January 1970 to December 1999 clearly shows a significant overestimation by the simulation. Figure 4.1 illustrates this behavior for the year 1998 at the Leineturm gauge, located at the outlet of the catchment.

Figure 4.1: Observed and simulated flow for the year 1998 of the Leineturm gauge, using the two steps daily/hourly strategy

The overestimation of the simulated data is explained by the model not accounting for the actual evapotranspiration. As the actual evapotranspiration was not considered, this surplus of water was eventually reflected in the flow.

For an unknown reason, this inaccuracy in the actual evapotranspiration occurred on occasions when there was rainfall. To demonstrate this situation, the period from 5 June 1980 until 11 June 1980 was investigated.

During this time, a rainfall event was observed from 6 June until 10 June. For the whole period investigated, the potential evapotranspiration was calculated. However, Figure 4.2 clearly shows that for periods with rainfall, no actual evapotranspiration was taken into account.

Figure 4.2: Precipitation event (left), potential evapotranspiration (middle), actual evapotranspiration (right)

As mentioned before, this problem concerning the evapotranspiration has unknown causes. It may be a miscalculation of the model or some difficulty related to handling of the model. However, this issue is beyond the scope of this study.

Furthermore, concerning this problem, if the model is run at an hourly time step, a day with rainfall does not necessarily mean that it rained during every hour of that day. Consequently, evapotranspiration would be accounted for in some hourly time steps even though no evapotranspiration occurred at the daily time step. Thus, parameters estimated at a daily time step cannot be transferred to a simulation at an hourly time step, because the water balance is not described in the same way.
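A toy example of this inconsistency, assuming (as observed above) that actual evapotranspiration is set to zero in any time step with rainfall; the constant potential ET rate is purely illustrative:

```python
# A day with rain only from 06:00 to 08:00 [mm/h]
hourly_rain = [0.0] * 24
hourly_rain[6] = 2.0
hourly_rain[7] = 1.5

pet_per_hour = 0.1  # assumed constant potential ET [mm/h]

# daily time step: the whole day counts as rainy -> no ET at all
aet_daily = 0.0 if sum(hourly_rain) > 0 else 24 * pet_per_hour

# hourly time step: ET is suppressed only in the two rainy hours
aet_hourly = sum(0.0 if r > 0 else pet_per_hour for r in hourly_rain)

print(aet_daily, aet_hourly)  # 0.0 2.2 -> different water balances
```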

## b) Two steps - MAVG

The MAVG showed a good result for the water balance over a specific time period. However, the fluctuations with higher frequency were miscalibrated and in parts over- and underestimated; see Figure 4.3.

Furthermore, the inaccuracy of the baseflow is clear. This situation has a direct negative impact on the calibration of the groundwater parameters.

Figure 4.3: Observed and simulated flow for the year 2007 using the two steps moving average strategy

For this reason, the attempt to first calibrate the slow components, without looking at the fluctuations, and then in a second step transfer them to the fast components, did not lead to a satisfactory result.
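A moving average of the kind used in this strategy can be sketched as follows; the 30-day window length is an assumption for illustration:

```python
import numpy as np

def moving_average(flow, window=30):
    # centered moving average that damps high-frequency fluctuations
    # and leaves the slow (baseflow-like) signal
    kernel = np.ones(window) / window
    return np.convolve(flow, kernel, mode="same")

rng = np.random.default_rng(0)
# synthetic daily series: slow seasonal signal plus fast noise
flow = 2.0 + np.sin(np.linspace(0.0, 12.0, 365)) + 0.3 * rng.standard_normal(365)
smooth = moving_average(flow)
interior = smooth[30:-30]  # ignore edge effects of the convolution
print(interior.var() < flow.var())  # True: smoothing reduces the variance
```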

## c) Lumped, one factor and clusterization

Subsequently, the performances of the different strategies, namely lumped, one factor and clusterization, were compared to each other. Furthermore, the strategies were compared to the uncalibrated model (initial parameters).
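The difference in the number of calibrated unknowns between the three strategies can be illustrated with a toy sketch; the sub-catchment names, initial values and cluster assignment below are hypothetical:

```python
# One parameter (e.g. a groundwater storage coefficient) over six
# sub-catchments; all names and values are illustrative only.
initial = {"sub1": 4.0, "sub2": 5.0, "sub3": 4.5,
           "sub4": 6.0, "sub5": 5.5, "sub6": 4.8}
clusters = {"sub1": "A", "sub2": "A", "sub3": "B",
            "sub4": "B", "sub5": "C", "sub6": "C"}

# lumped: one common value for all sub-catchments -> 1 unknown
lumped = {sub: 5.0 for sub in initial}

# one factor: a single multiplier scales all initial values -> 1 unknown,
# but the spatial pattern of the initial estimate is preserved
factor = 1.1
one_factor = {sub: factor * v for sub, v in initial.items()}

# clusterization: one value per cluster of similar sub-catchments -> 3 unknowns
cluster_values = {"A": 4.6, "B": 5.2, "C": 5.1}
clustered = {sub: cluster_values[clusters[sub]] for sub in initial}

print(len(set(lumped.values())), len(set(clustered.values())))  # 1 3
```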

Table 4.2 shows the Nash-Sutcliffe index NSC and the failure volume FV for every gauge and each parameter strategy, as well as the indexes of the initial parameters (without calibration).

Table 4.2: Failure volume and Nash-Sutcliffe index of every gauge and every strategy

| Strategy | Gauge | Area [km²] | Calibration FV [%] | Calibration NSC | Validation FV [%] | Validation NSC |
| --- | --- | --- | --- | --- | --- | --- |
| Lumped | Leineturm | 990 | 1.20 | 0.82 | -1.65 | 0.61 |
| Lumped | Göttingen | 633 | 1.86 | 0.77 | 5.71 | 0.60 |
| Lumped | Reckershausen | 321 | 3.81 | 0.64 | 8.36 | 0.52 |
| Lumped | Gartemühle | 86.3 | -21.65 | 0.59 | -27.77 | 0.39 |
| Lumped | Mariengarten | 45.2 | 37.41 | -0.23 | 42.41 | -1.04 |
| 1 Factor | Leineturm | 990 | 3.81 | 0.83 | -11.54 | 0.76 |
| 1 Factor | Göttingen | 633 | 3.03 | 0.83 | -5.11 | 0.74 |
| 1 Factor | Reckershausen | 321 | 1.30 | 0.62 | -4.21 | 0.62 |
| 1 Factor | Gartemühle | 86.3 | -32.36 | 0.38 | -68.58 | 0.20 |
| 1 Factor | Mariengarten | 45.2 | 51.57 | -2.52 | 51.23 | -3.98 |
| Clusterization | Leineturm | 990 | 2.21 | 0.87 | -7.22 | 0.75 |
| Clusterization | Göttingen | 633 | 0.35 | 0.82 | 1.53 | 0.63 |
| Clusterization | Reckershausen | 321 | -2.05 | 0.57 | 3.04 | 0.48 |
| Clusterization | Gartemühle | 86.3 | -48.81 | 0.04 | -56.28 | 0.11 |
| Clusterization | Mariengarten | 45.2 | 55.63 | -4.96 | 53.85 | -8.59 |
| Without calibration | Leineturm | 990 | 5.37 | 0.51 | -4.45 | 0.51 |
| Without calibration | Göttingen | 633 | 4.95 | 0.40 | 0.91 | 0.44 |
| Without calibration | Reckershausen | 321 | 1.97 | -0.03 | -0.41 | 0.26 |
| Without calibration | Gartemühle | 86.3 | -28.37 | 0.26 | -49.10 | 0.27 |
| Without calibration | Mariengarten | 45.2 | 5.11 | -5.57 | 53.38 | -8.54 |

First of all, all strategies showed a clear improvement in model performance compared to the uncalibrated model.

The Mariengarten gauge showed a very poor performance level for every strategy, with unreasonable NSC values. In contrast, the Göttingen and Leineturm gauges showed good performances.

Figure 4.4 and Figure 4.5 clearly show the difference in performance between the Leineturm and Mariengarten gauges. The better fit between observed and simulated data at the Leineturm gauge is evident.

Figure 4.4: Observed and simulated flow for the year 2007 of the Leineturm gauge, using the lumped strategy

Figure 4.5: Observed and simulated flow for the year 2007 of Mariengarten gauge, using the lumped strategy

However, the discrepancy becomes smaller when the gauge receives a higher volume of runoff contribution. This might depend on the structure of the model, or on the fact that during calibration the gauge with the largest area also received the largest weight.

For a better understanding, a well performing gauge, Leineturm, is once more compared with the poorest performing one, Mariengarten: Figure 4.6 shows the FV on one axis and the NSC index on the other.


Figure 4.6: Nash-Sutcliffe × Failure volume of Leineturm (left) and Mariengarten (right) gauges

According to Figure 4.6, the better performance of Leineturm compared to Mariengarten is evident.

For the Leineturm gauge, considering the calibration process, the NSC index reached values higher than 0.8 for all calibration strategies. In addition, the failure volume shows a modest overestimation for the calibration and an underestimation for the validation step.

On the other hand, for the Mariengarten gauge, for almost all strategies, both in calibration and validation, the NSC index was less than zero, which means that the average of the measured values represents the observations better than the simulated series. The failure volume gives strong evidence of a high overestimation, with values of around 48%, for both validation and calibration.

Still, for the Mariengarten gauge it can be observed that, for the clusterization strategy in the validation process, the FV and NSC index obtained a result which does not fit the other ones. However, this is surely by chance.

In order to compare the performance of the different strategies for high discharges, the fifteen highest independent peaks were selected. The Leineturm gauge is used as an example (for the Reckershausen and Mariengarten gauges see Appendix II). Below, the sequence of figures (Figure 4.7, Figure 4.8 and Figure 4.9) shows, for every strategy, the observed and the simulated discharge peaks. Nevertheless, it is important to consider that a period of five years is somewhat short for a deeper statistical analysis.

Figure 4.7: Fifteen highest independent peaks for the lumped strategy for Leineturm gauge

Figure 4.8: Fifteen highest independent peaks for the one factor strategy for Leineturm gauge

Figure 4.9: Fifteen highest independent peaks for the clusterization strategy for Leineturm gauge

Comparing the above graphics, the three lowest peaks have basically the same behavior, with a good fit between observed and simulated values. However, for the highest peak, the lumped strategy showed the best result.

Despite the difference in the highest peak among the three strategies, the other peaks follow almost the same pattern. Therefore, in order to compare the fit of the strategies, the square of the correlation coefficient R2 was used, as shown below in Figure 4.10.

Figure 4.10: Square correlation coefficient of Leineturm gauge for lumped (left), one factor (middle) and clusterization (right) strategies

Due to the low number of samples, the differences among the R2 values of the strategies were somewhat small. However, the lumped and clusterization strategies obtained slightly better coefficients.

In order to compare the selected gauges Leineturm, Reckershausen and Mariengarten, Table 4.3 shows the R2 coefficient for all strategies.

Table 4.3: Square correlation coefficient of Leineturm, Reckershausen and Mariengarten gauges for lumped, one factor and clusterization strategies

| Strategy | Leineturm | Reckershausen | Mariengarten |
| --- | --- | --- | --- |
| Lumped | 0.987 | 0.868 | 0.988 |
| One factor | 0.991 | 0.940 | 0.905 |
| Clusterization | 0.988 | 0.955 | 0.978 |

Considering the average of R2, the clusterization strategy obtained the best result. However, the differences among the strategies are not that apparent.

Furthermore, the Mariengarten gauge, which performed badly over the entire series, performed notably well for the peaks. This means that if the objective of the analysis is, for example, floods, where the high peaks are the focus, the Mariengarten gauge could be used for that purpose.

## 5. Discussion

First of all, before commenting on the results, it is important to emphasize once more that the five-year period used for this investigation is not sufficient to draw an explicit conclusion about the parameter strategies when extreme values are the focus. It is obvious that the longer the time series, the more robust the evaluation is.

Anyhow, except for the two steps - daily/hourly and the two steps - moving average strategies, whose results were unsatisfactory in the one case and not applicable in the other, the parameter strategies obtained a consistent improvement in model calibration compared to the uncalibrated model.

Although the difference between the strategies is not very evident, the clusterization of the groundwater parameter groups obtained a slightly better performance than the others. This might be due to the additional parameter flexibility it allows, since the estimates of the initial groundwater parameters are not that certain.

Furthermore, the results suggest that calibration using parameters with or without spatial variability gave nearly equivalent performance.

Despite neglecting the spatial variability of the parameters, the lumped strategy obtained results clearly comparable to the others.

In addition, this response showed that the model is not strongly sensitive to spatial parameter variability; thus a mean parameter value essentially performs well.

However, with these results we cannot affirm that the spatial parameter variability is not significant.

Finally, this study explored different techniques with the purpose of reducing the parameter dimensionality in calibration. However, more approaches need to be investigated in order to achieve good parameter calibration performance.