
Models ‐ Hybrid model


1 Introduction

This document describes the calibration of the reservoir routines using a parameter-learning framework. So far I have only tested the approach with the linear reservoir, as it is the simplest routine and keeps the implementation manageable.

2 Methods

2.1 Hybrid model

The library NeuralHydrology contains a specific class (HybridModel) designed to apply parameter learning on a conceptual model. This class combines a deep learning model with a conceptual model implemented in PyTorch. The deep learning model consists of an LSTM and a fully connected layer, and its objective is to estimate the parameters of the conceptual model. The conceptual model is in our case the reservoir routine; it uses the inflow time series and the estimated model parameters to predict the reservoir behaviour. The scheme below shows the general structure: LSTM, FC and conceptual; a shape-level sketch follows the dimension list below.

Figure 1. Scheme of the hybrid model. $X_d$ and $X_s$ are, respectively, the dynamic and static inputs to the parameter estimation part of the hybrid model. $X_{d,c}$ are the dynamic inputs to the conceptual model. $\hat{y}$ is the output time series. The brackets represent the dimensions of the tensors.

Dimensions in Figure 1:
$batch$: batch size, i.e., number of days to be predicted in parallel
$seq$: the number of past days used to predict one day
$n_d$: number of dynamic inputs
$n_s$: number of static inputs
$hidden$: number of LSTM cells
$n_{d,c}$: number of dynamic inputs to the conceptual model
$n_{par}$: number of parameters in the conceptual model
$wu$: number of days used to warm up the internal states of the conceptual model
$n_{target}$: number of target variables
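
To make these dimensions concrete, here is a minimal, illustrative PyTorch sketch of the parameter-estimation branch (LSTM + FC). The sizes and the way static inputs are repeated along the sequence are assumptions for illustration, not the exact NeuralHydrology implementation.

```python
import torch
import torch.nn as nn

# Illustrative sizes (not the exact ones used in this study)
batch, seq, n_d, n_s = 512, 365, 11, 17
hidden, n_par = 256, 1

lstm = nn.LSTM(input_size=n_d + n_s, hidden_size=hidden, batch_first=True)
fc = nn.Linear(hidden, n_par)

x_d = torch.randn(batch, seq, n_d)   # dynamic inputs [batch, seq, n_d]
x_s = torch.randn(batch, n_s)        # static inputs [batch, n_s]
# repeat the static inputs along the sequence and concatenate with the dynamic ones
x = torch.cat([x_d, x_s.unsqueeze(1).expand(-1, seq, -1)], dim=-1)  # [batch, seq, n_d + n_s]

lstm_out, _ = lstm(x)      # [batch, seq, hidden]
fc_out = fc(lstm_out)      # [batch, seq, n_par] -> raw (unscaled) conceptual parameters
```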

2.1.1 Conceptual model

The PyTorch version of the conceptual model needs to be a child of the class BaseConceptualModel. Three methods need to be defined in the conceptual model class:

  • forward: the function that simulates the reservoir behaviour. All the samples in a batch are processed in parallel. Each sample represents the prediction of one day, and samples must be independent for the model to parallelise them. In reality, however, the prediction of day $t$ affects the prediction of day $t+1$ by changing the storage available the following day. To work around this limitation, the function implements a loop that warms up the reservoir storage from a fixed initial state (see below) over a predefined number of days (warmup_period).

  • initial_states: defines the initial conditions of the state variables in the model. In the case of reservoirs, the only state variable is the fraction filled ($\text{ff}$), which I set to 0.67, the GloFAS default value. This fixed initial state is updated during the warm-up loop in the forward method to obtain the reservoir storage on the prediction day.

  • parameter_ranges: a dictionary that specifies the model parameters and their search ranges during training. The parent class BaseConceptualModel includes a method called _get_dynamic_parameters_conceptual that uses this range to rescale the output of the LSTM+FC network ($FC_{out}$ in Figure 1) with a sigmoid function, as sketched below.
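
A minimal sketch of this rescaling, assuming the usual sigmoid mapping onto the search range (the function name rescale is mine, not the library's):

```python
import torch

# Squash the raw network output with a sigmoid and map it onto [low, high]
def rescale(fc_out: torch.Tensor, low: float, high: float) -> torch.Tensor:
    return low + torch.sigmoid(fc_out) * (high - low)

# e.g. a residence time constrained to an assumed range of 1-1000 days
T = rescale(torch.randn(512, 365), low=1.0, high=1000.0)
```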

So far, I have only implemented the linear reservoir routine, in three flavours that differ in the output variables (outflow, storage or both), so that I can train the model against different target variables.
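
As an illustration, here is a minimal sketch of what such a child class could look like for the linear reservoir. The import path, the assumption that initial_states and parameter_ranges are properties, the simplified forward signature and the search range for $T$ are all mine, not the library's exact API.

```python
import torch
from neuralhydrology.modelzoo.baseconceptualmodel import BaseConceptualModel  # assumed path


class LinearReservoir(BaseConceptualModel):
    """Sketch of a linear reservoir: outflow is proportional to storage, O_t = ff_t / T_t."""

    def forward(self, x_conceptual: torch.Tensor, parameters: dict) -> dict:
        # x_conceptual: [batch, seq, 1] normalised inflow (fraction of capacity per day)
        # parameters['T']: [batch, seq] residence time in days, already rescaled
        inflow, T = x_conceptual[:, :, 0], parameters['T']
        ff = torch.full_like(inflow[:, 0], self.initial_states['ff'])  # fixed initial state
        outflow = torch.zeros_like(inflow)
        storage = torch.zeros_like(inflow)
        for t in range(inflow.shape[1]):       # warm-up loop over the sequence
            out_t = ff / T[:, t]               # linear release (out_t <= ff since T >= 1)
            ff = ff + inflow[:, t] - out_t     # water balance on the fraction filled
            outflow[:, t], storage[:, t] = out_t, ff
        # the 'both outputs' flavour; the other two return only outflow or storage
        return {'y_hat': torch.stack([outflow, storage], dim=-1)}

    @property
    def initial_states(self) -> dict:
        return {'ff': 0.67}  # GloFAS default fraction filled

    @property
    def parameter_ranges(self) -> dict:
        return {'T': [1.0, 1000.0]}  # assumed search range for the residence time (days)
```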

3 Data

I am using what I call version 3 of the ResOpsUS dataset. In this version I have added time series of catchment meteorology taken from the GloFAS inputs (average temperature, precipitation and evapotranspiration), which I will use as dynamic inputs to the parameter estimator. All time series except temperature include both the original and the normalised values. For variables related to water volumes (precipitation, evapotranspiration, inflow, storage and outflow), the normalisation is based on the reservoir storage capacity, so all normalised variables are fractions of that capacity. For temporal variables (month, day of the year, week of the year, day of the week...), the normalisation creates two variables: the sine and the cosine.
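
A hedged sketch of these two normalisations (function names are mine):

```python
import numpy as np
import pandas as pd

def normalise_by_capacity(series: pd.Series, capacity: float) -> pd.Series:
    # volume-related variables become fractions of the reservoir storage capacity
    return series / capacity

def encode_cyclic(values: np.ndarray, period: float) -> tuple[np.ndarray, np.ndarray]:
    # temporal variables become a sine/cosine pair, so the end of a cycle
    # (e.g. 31 December) sits next to its beginning (1 January)
    angle = 2 * np.pi * values / period
    return np.sin(angle), np.cos(angle)

# e.g. day of the year
doy_sin, doy_cos = encode_cyclic(np.arange(1, 366), period=365.25)
```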

Note. The deep learning model applies an extra normalisation to the inputs to make sure that all values are on the same scale. The default normalisation uses the mean and standard deviation. I wonder how strictly positive, right-skewed variables such as precipitation, inflow or outflow behave under it. Would it be beneficial to apply a logarithmic or square-root transformation to normalise those variables?
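
A minimal sketch of the option the note raises, assuming a log1p transform before the standard mean/std scaling (untested, for illustration only):

```python
import numpy as np

def log_standardise(x: np.ndarray) -> np.ndarray:
    z = np.log1p(x)                  # keeps zeros valid; compresses the right tail
    return (z - z.mean()) / z.std()  # then the usual mean/std standardisation
```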

I am using data for 118 reservoirs with at least 4 years of records of inflow, storage and outflow. 85 reservoirs were used for training and 33 for validation; I did not set aside a test sample, and I do not apply temporal validation.

3.1 Inputs

The model requires three types of inputs. The deep learning part requires both dynamic inputs (time series) and static inputs (reservoir and dam attributes). In addition, the conceptual model requires its own dynamic inputs, which in the case of reservoirs is the inflow (in fact, the normalised inflow).

3.1.1 Dynamic inputs

After several tests, these are the dynamic inputs to the parameter estimator:

  • Meteorology. I have tested using non-normalised variables, but the performance deteriorated.
    • Normalised areal evapotranspiration
    • Normalised areal precipitation
    • Areal temperature
  • Temporal. In all cases I include both the sine and the cosine. I have run tests removing these variables, as they are hemisphere-specific (the timing of the seasons over the year differs). However, the performance deteriorated.
    • Month
    • Week of the year
    • Day of the year
    • Day of the week

3.1.2 Static inputs

As static inputs I am using only GRanD attributes:

  • Longitude, latitude and elevation
  • Dam height and length
  • Reservoir area, volume and depth
  • Catchment area
  • Average inflow
  • Degree of regulation
  • Use:
    • Electricity
    • Flood control
    • Irrigation
    • Navigation
    • Water supply
    • Single use

3.1.3 Inputs to the conceptual model

The only input to the conceptual model is the time series of normalised inflow. To be able to model reservoirs with different capacities, the internal state of the conceptual model cannot be the storage, but the fraction filled. As the state variable is normalised, the input needs to be normalised as well.
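
In equation form, dividing the daily water balance by the storage capacity $V$ turns the storage into the fraction filled and the fluxes into fractions of capacity (daily time step assumed):

$$S_{t+1} = S_t + I_t - O_t \quad \Longrightarrow \quad \text{ff}_{t+1} = \text{ff}_t + \frac{I_t}{V} - \frac{O_t}{V}$$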

3.2 Outputs

I have tested two different outputs: normalised outflow or normalised storage.

4 Results

The results presented below correspond to a model that targets reservoir storage, as storage seems to be the more sensitive target. The best-performing model uses the following hyperparameters:

  • batch size: 512
  • epochs: 5
  • hidden size: 256
  • learning rate: 0.01
  • loss: NSE
  • optimizer: AdamW
  • output activation: linear
  • output dropout: 0.4
  • sequence length: 365
  • target noise standard deviation: 0.005
  • warmup period: 0, which means that the warm-up period is as long as the sequence length (1 year)
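
For reference, these settings written out as a NeuralHydrology-style configuration; the key names follow the library's conventions, but their exact spelling should be treated as an assumption.

```python
# Sketch of the best-performing configuration (in practice a YAML file)
config = {
    'batch_size': 512,
    'epochs': 5,
    'hidden_size': 256,
    'learning_rate': 0.01,
    'loss': 'NSE',
    'optimizer': 'AdamW',
    'output_activation': 'linear',
    'output_dropout': 0.4,
    'seq_length': 365,
    'target_noise_std': 0.005,
    'warmup_period': 0,  # 0 -> warm-up spans the whole sequence
}
```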

I have performed multiple trainings varying one of these hyperparameters at a time. In particular, I have focused on reducing overfitting by changing the dropout rate, clipping gradients, reducing the batch size, reducing the complexity of the model with a smaller hidden size, increasing the noise in the target variable...

4.1 Performance

Figure 2 shows the performance on the training and validation sets over the epochs. The best epoch, with the highest median KGE in the validation set, is the fifth (KGE = 0.242). This epoch has a median KGE of 0.845 in the training set, which denotes overfitting. As training proceeds, the performance on the training set improves both in median value and in dispersion. In contrast, the performance on the validation set barely improves, which indicates that the model does not transfer well to the validation catchments.

Figure 2. Evolution of performance over the training epochs in the train and validation sets.
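
For reference, the Kling-Gupta efficiency (KGE) reported here is defined as

$$\text{KGE} = 1 - \sqrt{(r-1)^2 + (\alpha-1)^2 + (\beta-1)^2},$$

where $r$ is the correlation between the simulated and observed series, $\alpha$ the ratio of their standard deviations and $\beta$ the ratio of their means; KGE = 1 indicates a perfect simulation.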

Figure 3 shows the location of the reservoirs and their performance. The majority of the poorly performing reservoirs are clearly in the validation set, but there is no clear geographical pattern.

Figure 3. Geographical distribution of the model performance for the best epoch. Circles represent reservoirs in the training set, and triangles those in the validation set.

4.2 Parameters

The deep learning part of the hybrid model estimates a parameter value for every sample, i.e., for every day in the records. This means that the model parameter is not a constant, but a time series. This poses a challenge, as parameters are supposed to be constants in a conceptual model.

Figure 4 illustrates this behaviour. It compares the simulated time series with the input/observed time series. The top panel compares the estimated parameter value (residence time) against the dynamic inputs, to give an intuition of the influence of the dynamic inputs on the variation of the estimated parameter. Since this version of the conceptual model targets storage, it does not generate an outflow time series, which is why the central panel is empty.

Figure 4. Comparison of the model simulation and observations for reservoir 372. Top panel for the model parameter residence time ($T$), central panel for outflow and lower panel for storage.

The residence time parameter shows a clear seasonal behaviour. The model generates larger residence times in the period of the year when the reservoir stores water, and smaller residence times when it needs to release water. The question is, if we need a fixed value for this parameter, what should it be? The mean? The median? To limit the influence of extreme values (more on this below), I have used the median.

The estimated residence time shows spikes at regular intervals. I have analysed this issue and found that the spikes are located exactly at the first sample of every batch. In this example, the batch size is 524, so every 524 days there is an outlier in the residence time. This extreme parameter value of course affects the storage simulation, which also exhibits these peaks. I still do not understand why this happens.

The variation of the estimated model parameter with time occurs in all reservoirs. Figure 5 shows boxplots of the time series of the model parameter for the 33 reservoirs in the validation set. For comparison, the black dot represents the degree of regulation (in days) reported by GRanD, as a proxy for the residence time.

Figure 5. Boxplots of the estimated residence time ($T$) for the 33 reservoirs in the validation set. Black dots represent the degree of regulation (in days) reported by GRanD.

If we consider the median value as the representative value of the model parameter, the estimation renders values in the lower half of the search range. These values differ significantly from those reported by GRanD (which need to be taken with a pinch of salt because they are based on a WaterGAP simulation). Apart from that, some reservoirs exhibit a very large variation in the parameter value.