Performance Measurement on Deep Spiking Neural Network (DSNN) Algorithm in Flood Prediction Environment


Bibliographic Details
Main Author: Roselind, Tei
Format: Thesis
Language: English
Published: UNIMAS 2023
Subjects:
Online Access: http://ir.unimas.my/id/eprint/42682/3/DSVA_Roselind%20Tei.pdf
http://ir.unimas.my/id/eprint/42682/4/Thesis%20Mastera_Roselind%20Tei.fulllftext.pdf
http://ir.unimas.my/id/eprint/42682/5/Msc._Roselind%20Tei%20-24%20pages.pdf
http://ir.unimas.my/id/eprint/42682/
Description
Summary: Several algorithms are used to predict floods, including LSTM, BP, MLP, SARIMA, and SVM. While shallow neural networks are simple and efficient, they have limited memory and may not accurately capture long-term patterns or large-scale data. LSTM has gained attention among flood-prediction researchers for its ability to preserve historical data and solve complex time-series problems. However, research in this area is ongoing, with potential for further improvement, and current studies are exploring new directions by developing hybrid algorithms. The SNN, a third-generation ANN, was created to handle more complex data with a higher decision-making firing rate than conventional ML and DL models. In this study, a new hybrid DSNN algorithm is used to predict floods in Kuala Baram, Miri, Sarawak. Thirty years of rainfall data (1989-2019) were collected from DID to evaluate the effectiveness of the DSNN algorithm against traditional and shallow neural network algorithms. Performance was measured using ACC, RMSE, SPE, SEN, PPV, NPV, and ASP. A comprehensive analysis of the proposed DSNN algorithm was conducted, validated with four training/validation/testing split ratios: 80:10:10, 70:15:15, 60:20:20, and 50:25:25. The results showed that the DSNN algorithm outperformed the other algorithms, with an ACC of 98.10%, an RMSE of 6.5%, a SEN of 93.50%, an SPE of 79.00%, and an ASP of 89.60%. Overall, the DSNN algorithm with the 80:10:10 training sample ratio performed best.
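The summary lists its evaluation metrics (ACC, RMSE, SEN, SPE, PPV, NPV) without definitions. As an illustrative sketch only, not the thesis's actual implementation, these metrics in their standard confusion-matrix form for a binary flood/no-flood prediction task can be computed as below; the function name `flood_metrics`, its arguments, and the RMSE-on-labels fallback are assumptions, and ASP is omitted because its definition is specific to the thesis:

```python
import math

def flood_metrics(y_true, y_pred, y_prob=None):
    """Standard confusion-matrix metrics for binary flood (1) / no-flood (0)
    labels. Illustrative only; not the thesis's implementation."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = len(y_true)
    acc = (tp + tn) / n                        # ACC: overall accuracy
    sen = tp / (tp + fn) if tp + fn else 0.0   # SEN: sensitivity (recall)
    spe = tn / (tn + fp) if tn + fp else 0.0   # SPE: specificity
    ppv = tp / (tp + fp) if tp + fp else 0.0   # PPV: positive predictive value
    npv = tn / (tn + fn) if tn + fn else 0.0   # NPV: negative predictive value
    # RMSE on predicted probabilities if given, else on the hard labels
    probs = y_prob if y_prob is not None else y_pred
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, probs)) / n)
    return {"ACC": acc, "RMSE": rmse, "SEN": sen,
            "SPE": spe, "PPV": ppv, "NPV": npv}
```

With such a helper, the four split ratios in the summary (80:10:10 and so on) would simply determine which held-out portion of the 30-year rainfall series these metrics are computed on.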