Paper Title
A proactive autoscaling and energy-efficient VM allocation framework using online multi-resource neural network for cloud data center
Paper Authors
Paper Abstract
This work proposes an energy-efficient resource provisioning and allocation framework to meet the dynamic demands of future applications. Frequent variations in a cloud user's resource demand lead to excess power consumption, resource wastage, and degradation of performance and Quality-of-Service. The proposed framework addresses these challenges by precisely matching the application's predicted resource requirement with the resource capacity of VMs, thereby consolidating the entire load on the minimum number of energy-efficient physical machines. The three consecutive contributions of the proposed work are: an Online Multi-Resource Feed-forward Neural Network (OM-FNN) to forecast multiple resource demands of future applications concurrently; autoscaling of VMs based on clustering of the predicted resource requirements; and allocation of the scaled VMs on energy-efficient PMs. The integrated approach successively optimizes resource utilization, saves energy, and automatically adapts to changes in future application resource demand. The proposed framework is evaluated on real workload traces from the benchmark Google Cluster Dataset and compared against different scenarios, including energy-efficient VM placement (VMP) with resource prediction only, VMP without resource prediction and autoscaling, and optimal VMP with autoscaling based on actual resource utilization. The observed results demonstrate that the proposed integrated approach achieves near-optimal performance relative to optimal VMP and outperforms the remaining VMPs in power saving and resource utilization by up to 88.5% and 21.12%, respectively. In addition, the OM-FNN predictor shows better accuracy and lower time and space complexity than a traditional single-input, single-output feed-forward neural network predictor.
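To make the three-stage pipeline described above concrete, the following is a minimal, hypothetical sketch of how such a framework could be wired together: a small multi-output feed-forward network trained online stands in for the OM-FNN predictor, plain k-means clustering of the predicted (CPU, memory) demands yields the autoscaled VM sizes, and a first-fit-decreasing heuristic places those VMs on the most power-efficient PMs first. All class and function names (MultiResourceFNN, cluster_demands, place_vms) and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the predict -> autoscale -> place pipeline.
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: online multi-resource demand forecasting -------------------
class MultiResourceFNN:
    """One feed-forward network with one output per resource (e.g. CPU, memory),
    trained online with SGD; a stand-in for the paper's OM-FNN predictor."""
    def __init__(self, n_inputs, n_hidden, n_outputs, lr=0.01):
        self.W1 = rng.normal(0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_outputs))
        self.b2 = np.zeros(n_outputs)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2         # one forecast per resource

    def train_step(self, x, y):
        y_hat = self.forward(x)
        err = y_hat - y                            # squared-error gradient
        dW2 = np.outer(self.h, err)                # backprop through layer 2
        dh = err @ self.W2.T * (1 - self.h ** 2)   # backprop through tanh
        dW1 = np.outer(x, dh)
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * err
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * dh
        return y_hat

# --- Stage 2: cluster predicted demands to autoscale VM sizes ------------
def cluster_demands(demands, k, iters=20):
    """Plain k-means over predicted (cpu, mem) demands; each centroid becomes
    a VM flavour and cluster membership determines how many VMs to launch."""
    centers = demands[rng.choice(len(demands), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((demands[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([demands[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# --- Stage 3: place scaled VMs on the fewest, most efficient PMs ---------
def place_vms(vm_sizes, pm_capacity, pm_efficiency):
    """First-fit decreasing: try PMs in order of power efficiency and open a
    new PM only when no already-active PM can host the VM."""
    order = np.argsort(-pm_efficiency)           # most efficient PM first
    free = {}                                    # pm index -> remaining capacity
    placement = []
    for vm in sorted(vm_sizes, key=lambda v: -v.sum()):
        for pm in order:
            cap = free.get(pm, pm_capacity[pm].astype(float).copy())
            if np.all(vm <= cap):
                free[pm] = cap - vm
                placement.append((tuple(vm), int(pm)))
                break
    return placement, sorted(free)               # active (powered-on) PMs

# --- Toy end-to-end run ---------------------------------------------------
history = rng.uniform(0.1, 0.9, (200, 4))        # sliding window of past usage
targets = rng.uniform(0.1, 0.9, (200, 2))        # next-step (cpu, mem) demand
fnn = MultiResourceFNN(n_inputs=4, n_hidden=8, n_outputs=2)
preds = np.array([fnn.train_step(x, y) for x, y in zip(history, targets)])

vm_sizes, _ = cluster_demands(preds[-50:], k=3)  # three VM flavours from forecasts
placement, active_pms = place_vms(
    vm_sizes,
    pm_capacity=np.full((5, 2), 2.0),
    pm_efficiency=np.array([0.9, 0.7, 0.8, 0.6, 0.5]))
print("VM flavours:", np.round(vm_sizes, 2))
print("active PMs:", active_pms)
```

The intent of the sketch is only to show how the three contributions compose: forecasts feed the clustering step, and the clustered sizes feed the placement step, so fewer PMs stay powered on as demand is consolidated. The actual OM-FNN architecture, clustering method, and placement heuristic used in the paper may differ.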