Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches


Bibliographic Details
Main Authors: Chia M.Y., Huang Y.F., Koo C.H., Ng J.L., Ahmed A.N., El-Shafie A.
Other Authors: 57193957444
Format: Article
Published: Elsevier Ltd 2023
id my.uniten.dspace-26774
record_format dspace
spelling my.uniten.dspace-267742023-05-29T17:36:38Z Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches Chia M.Y. Huang Y.F. Koo C.H. Ng J.L. Ahmed A.N. El-Shafie A. 57193957444 55807263900 57204843657 57723346800 57214837520 16068189400 Brain; Budget control; Convolution; Convolutional neural networks; Deep neural networks; Deterioration; Errors; Evapotranspiration; Forecasting; Mean square error; Multilayer neural networks; Convolutional neural network; Gated recurrent unit; Long-term forecasting; Memory network; Multiple inputs; Multiple outputs; Network models; Reference evapotranspiration; Time horizons; Training strategy; Long short-term memory Prediction of reference evapotranspiration (ET0) remains a challenge, especially with forward multi-step forecasting. The bottleneck facing current research is the limited span of the forecasting time horizon, which can be rather restrictive when long-term forecasting is desired. In this study, an explainable model structure, represented by a one-dimensional convolutional neural network (CNN-1D), was compared to the long short-term memory network (LSTM) and the gated recurrent unit network (GRU), both formulated as black-box models. The comparison covered different forecasting strategies (iterated vs. multiple-input multiple-output (MIMO)) and approaches (direct vs. indirect). The study was conducted at four stations scattered across Peninsular Malaysia. The results show that the explainable CNN-1D model generally performed worse than its black-box counterparts at most of the stations. The type of model and its structure, the forecasting strategy and the approach formed a complex relationship, indicating that there is no one-size-fits-all solution for the long-term prediction of monthly mean ET0. Even so, the GRU-based models stood out as the most well-suited option for the task, with the MIMO forecasting strategy being favoured over the iterated strategy. Across the four stations, the average mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and Kling-Gupta efficiency (KGE) of the best GRU models were 0.182 mm/day, 0.260 mm/day, 4.972 % and 0.747, respectively. The prediction residuals of the best GRU models showed no clear trend as the forecasting horizon was lengthened, implying that, in theory, the forecasting time horizon could be extended to a longer temporal scale without deterioration in model performance. This finding is positive, as it opens the possibility of allocating the water budget with higher confidence. Nevertheless, the LSTM and GRU models developed in this study are believed to have even greater potential if designed with purpose (such as through the integration of an optimisation algorithm) rather than remaining mere black-box structures. © 2022 Elsevier B.V. Final 2023-05-29T09:36:37Z 2023-05-29T09:36:37Z 2022 Article 10.1016/j.asoc.2022.109221 2-s2.0-85133621530 https://www.scopus.com/inward/record.uri?eid=2-s2.0-85133621530&doi=10.1016%2fj.asoc.2022.109221&partnerID=40&md5=3107bcd4c485e4dba973e3caddb5d7ee https://irepository.uniten.edu.my/handle/123456789/26774 126 109221 Elsevier Ltd Scopus
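
note The abstract above contrasts iterated and MIMO forecasting strategies with GRU models and reports MAE, RMSE, MAPE and the Kling-Gupta efficiency. The following minimal Python sketch (not the authors' code) illustrates both strategies and spells out those metrics; the window length, forecast horizon and layer size are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, assuming TensorFlow/Keras; WINDOW, HORIZON and the GRU
# layer size are illustrative assumptions, not taken from the paper.
import numpy as np
import tensorflow as tf

WINDOW = 12   # months of past mean ET0 used as input (assumed)
HORIZON = 12  # months forecast ahead (assumed)

# MIMO strategy: one GRU model maps the input window to all HORIZON future
# steps at once, so no prediction is ever fed back as an input.
mimo_gru = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(HORIZON),
])
mimo_gru.compile(optimizer="adam", loss="mse")

# Iterated strategy: a one-step GRU model is rolled forward, feeding each
# prediction back into the input window, so errors can accumulate.
onestep_gru = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),
])
onestep_gru.compile(optimizer="adam", loss="mse")

def iterated_forecast(model, window, horizon=HORIZON):
    """Apply a one-step model recursively over the forecast horizon."""
    history = list(window)
    preds = []
    for _ in range(horizon):
        x = np.asarray(history[-WINDOW:], dtype="float32").reshape(1, WINDOW, 1)
        yhat = float(model.predict(x, verbose=0)[0, 0])
        preds.append(yhat)
        history.append(yhat)  # feed the prediction back in
    return np.asarray(preds)

# Evaluation metrics reported in the abstract, written out explicitly.
def mae(obs, sim):
    return np.mean(np.abs(obs - sim))

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

def kge(obs, sim):
    # Kling-Gupta efficiency: distance from the ideal point of correlation
    # r = 1, variability ratio alpha = 1 and bias ratio beta = 1.
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

note Avoiding the error feedback loop of the iterated approach is consistent with the abstract's finding that the MIMO strategy was favoured for long forecast horizons.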
institution Universiti Tenaga Nasional
building UNITEN Library
collection Institutional Repository
continent Asia
country Malaysia
content_provider Universiti Tenaga Nasional
content_source UNITEN Institutional Repository
url_provider http://dspace.uniten.edu.my/
description Brain; Budget control; Convolution; Convolutional neural networks; Deep neural networks; Deterioration; Errors; Evapotranspiration; Forecasting; Mean square error; Multilayer neural networks; Convolutional neural network; Gated recurrent unit; Long-term forecasting; Memory network; Multiple inputs; Multiple outputs; Network models; Reference evapotranspiration; Time horizons; Training strategy; Long short-term memory
author2 57193957444
author_facet 57193957444
Chia M.Y.
Huang Y.F.
Koo C.H.
Ng J.L.
Ahmed A.N.
El-Shafie A.
format Article
author Chia M.Y.
Huang Y.F.
Koo C.H.
Ng J.L.
Ahmed A.N.
El-Shafie A.
spellingShingle Chia M.Y.
Huang Y.F.
Koo C.H.
Ng J.L.
Ahmed A.N.
El-Shafie A.
Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
author_sort Chia M.Y.
title Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
title_short Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
title_full Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
title_fullStr Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
title_full_unstemmed Long-term forecasting of monthly mean reference evapotranspiration using deep neural network: A comparison of training strategies and approaches
title_sort long-term forecasting of monthly mean reference evapotranspiration using deep neural network: a comparison of training strategies and approaches
publisher Elsevier Ltd
publishDate 2023
_version_ 1806423560071151616
score 13.214268