Paper Title


Evaluating State of the Art, Forecasting Ensembles- and Meta-learning Strategies for Model Fusion

Paper Authors

Cawood, Pieter, van Zyl, Terence

Paper Abstract


Techniques of hybridisation and ensemble learning are popular model fusion techniques for improving the predictive power of forecasting methods. With limited research that instigates combining these two promising approaches, this paper focuses on the utility of the Exponential-Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base models for different ensembles. We compare against some state of the art ensembling techniques and arithmetic model averaging as a benchmark. We experiment with the M4 forecasting data set of 100,000 time-series, and the results show that the Feature-based Forecast Model Averaging (FFORMA), on average, is the best technique for late data fusion with the ES-RNN. However, considering the M4's Daily subset of data, stacking was the only successful ensemble at dealing with the case where all base model performances are similar. Our experimental results indicate that we attain state of the art forecasting results compared to N-BEATS as a benchmark. We conclude that model averaging is a more robust ensemble than model selection and stacking strategies. Further, the results show that gradient boosting is superior for implementing ensemble learning strategies.
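
To make the fusion strategies named in the abstract concrete, the sketch below contrasts arithmetic model averaging, weighted model averaging (in the spirit of FFORMA-style learned weights), and model selection over a pool of base-model forecasts. This is an illustrative example only, not the paper's implementation: the forecast values, weights, and validation errors are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): late fusion of
# base-model forecasts for one series over a 6-step horizon.
# All numbers below are hypothetical.

forecasts = np.array([
    [101.0, 103.5, 104.2, 106.0, 108.1, 110.0],  # e.g. ES-RNN
    [100.2, 102.8, 105.0, 107.3, 109.0, 111.2],  # e.g. a statistical base model
    [ 99.5, 101.9, 103.8, 105.5, 107.7, 109.4],  # e.g. another base learner
])

# 1) Arithmetic model averaging: equal weight on every base model.
equal_weight_forecast = forecasts.mean(axis=0)

# 2) Weighted model averaging: per-model weights (chosen arbitrarily here;
#    FFORMA-style meta-learners estimate such weights from series features).
weights = np.array([0.5, 0.3, 0.2])
weighted_forecast = weights @ forecasts

# 3) Model selection: keep only the base model with the lowest validation error
#    (errors made up for illustration).
validation_errors = np.array([0.12, 0.15, 0.11])
selected_forecast = forecasts[validation_errors.argmin()]

print(equal_weight_forecast)
print(weighted_forecast)
print(selected_forecast)
```

A stacking ensemble, by contrast, would feed the base-model forecasts (and possibly series features) into a second-level learner trained to produce the combined forecast.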
