International publications
Abstract: This study evaluates the performance of various linear and nonlinear time series models for simulating monthly precipitation at 12 stations in Algeria. We compare hybrid ARMA–ARCH/GARCH models (CARMA-ARCH, CARMA-GARCH), their optimally tuned versions, and Bayesian-optimized variants, as well as two bilinear hybrid models (BL-ARCH and BL-GARCH). This is the first work to integrate and compare this specific suite of models for precipitation forecasting. Two scenarios were considered: (1) a 'uniform' scenario using the same inputs (temperature at lags 0 and 1) for all stations, and (2) a 'station-specific' scenario allowing additional lagged inputs (e.g. temperature lag 6, precipitation lags) chosen per station. In the first scenario, the simple hybrid models and their optimal versions performed well, yielding an average error of 17 mm, whereas the bilinear-based models exhibited the weakest performance. In the second scenario, where the number of inputs increased (precipitation at various lags in addition to temperature and its lags) and the chosen inputs varied across stations, the average error decreased by 12 mm relative to the first scenario. With the addition of temperature at lags 3 and 6 and precipitation at lag 6, the top-performing models shifted toward the BL-based models. Although all examined models performed acceptably, the BL-based models were superior owing to their ability to better handle the increased fluctuations introduced by the precipitation lags. The results also indicated that Bayesian optimization did not improve the performance of the base models despite increasing their complexity.
Our findings provide a framework for selecting model complexity based on data availability, which can enhance forecasting accuracy.
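The hybrid models compared above combine a linear mean process (ARMA or bilinear) with a conditional-variance process (ARCH/GARCH). As a minimal illustration of that structure, the sketch below simulates an AR(1) mean equation with ARCH(1) errors in plain NumPy; the parameter values (`phi`, `omega`, `alpha`) are hypothetical and chosen only for demonstration, not taken from the study.

```python
import numpy as np

def simulate_ar1_arch1(n, phi=0.5, omega=0.2, alpha=0.3, seed=0):
    """Simulate an AR(1) mean process with ARCH(1) errors.

    Model (illustrative parameter values only):
        y_t     = phi * y_{t-1} + e_t
        e_t     = sigma_t * z_t,  z_t ~ N(0, 1)
        sigma_t^2 = omega + alpha * e_{t-1}^2
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    e_prev = 0.0
    for t in range(1, n):
        sigma2 = omega + alpha * e_prev**2  # conditional variance (ARCH part)
        e = np.sqrt(sigma2) * rng.standard_normal()
        y[t] = phi * y[t - 1] + e           # AR(1) mean equation
        e_prev = e
    return y

series = simulate_ar1_arch1(500)
```

The ARCH term makes the error variance depend on the previous shock, which is what lets these hybrids capture the volatility clustering that a plain ARMA model misses.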
Abstract: Accurately estimating evaporation in reservoir systems is an essential step in creating a water budget, as this information is crucial for the effective management of water resources, particularly in countries experiencing water stress. This investigation tests the performance of tree-based methods (random forest, gradient boosting, extreme gradient boosting, adaptive boosting, and M5 prime) and neural network-based methods (multi-layer perceptron (MLP), Kolmogorov–Arnold network (KAN), recurrent neural network, long short-term memory, and gated recurrent unit) for estimating monthly evaporation at the Boukourdane Dam, an important reservoir located in a Mediterranean region of northern Algeria. The KAN method was used for the first time in evaporation prediction. Minimum and maximum temperatures (Tmin and Tmax, °C), wind speed (U, km/h), and relative humidity (H, %) recorded between 1996 and 2016 were used as model inputs. Using lagged values of the input data significantly increased the accuracy of the models. Although the applied machine learning models generally achieved high accuracy in predicting evaporation, neural network-based methods gave better results than tree-based ones. While the neural network-based methods produced results close to one another, the MLP yielded the best results on the test set. The most significant advantage of the KAN method, which consistently produced satisfactory results, is that it provides a clear and straightforward equation. Explainable artificial intelligence plots showed that Tmax is the most influential parameter in evaporation estimation. The study results will help decision-makers operate the dam efficiently.
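The abstract notes that lagged values of the inputs (Tmax, Tmin, U, H) significantly improved model accuracy. A minimal sketch of how such a lagged design matrix might be assembled is shown below; the function name, the chosen lags, and the synthetic data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lagged_features(X, lags):
    """Stack lagged copies of each predictor column into one design matrix.

    X    : (n, k) array of monthly predictors (e.g. Tmax, Tmin, U, H).
    lags : list of non-negative lags in months (0 = current month).
    Returns an (n - max(lags), k * len(lags)) array; the first max(lags)
    rows are dropped because they lack sufficient history.
    """
    n, m = X.shape[0], max(lags)
    cols = [X[m - lag : n - lag] for lag in lags]  # aligned lagged slices
    return np.hstack(cols)

# Hypothetical example: 20 years of monthly values for 4 predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(240, 4))
F = lagged_features(X, lags=[0, 1, 2])  # current month plus lags 1 and 2
```

Each row of `F` then pairs the current month's predictors with their values one and two months earlier, which is the form the tree-based and neural models would consume.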