Neurocomputing

A proactive autoscaling and energy-efficient VM allocation framework using online multi-resource neural network for cloud data center

Abstract

This work proposes an energy-efficient resource provisioning and allocation framework to meet the dynamic demands of future applications. Frequent variations in a cloud user's resource demand lead to excess power consumption, resource wastage, and degradation of performance and Quality of Service (QoS). The proposed framework addresses these challenges by precisely matching an application's predicted resource requirement with the resource capacity of VMs, thereby consolidating the entire load onto a minimum number of energy-efficient physical machines (PMs). The three consecutive contributions of the proposed work are: (1) an Online Multi-Resource Feed-forward Neural Network (OM-FNN) that forecasts multiple resource demands of future applications concurrently, (2) autoscaling of VMs based on clustering of the predicted resource requirements, and (3) allocation of the scaled VMs onto energy-efficient PMs. The integrated approach successively optimizes resource utilization, saves energy, and automatically adapts to changes in future application resource demand. The proposed framework is evaluated on real workload traces from the benchmark Google Cluster Dataset and compared against several scenarios: energy-efficient VM placement (VMP) with resource prediction only, VMP without resource prediction and autoscaling, and optimal VMP with autoscaling based on actual resource utilization. The observed results demonstrate that the proposed integrated approach achieves near-optimal performance relative to optimal VMP and outperforms the remaining VMPs in terms of power saving and resource utilization by up to 88.5% and 21.12%, respectively. In addition, the OM-FNN predictor shows better accuracy and lower time and space complexity than a traditional single-input, single-output feed-forward neural network (SISO-FNN) predictor. (C) 2020 Elsevier B.V. All rights reserved.
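To make the three-stage pipeline concrete, the sketch below outlines it in Python. It is not the authors' implementation: scikit-learn's MLPRegressor and KMeans stand in for the OM-FNN predictor and the demand-clustering step, the workload history is synthetic, and the VM sizes, PM capacities, efficiency values, and first-fit-decreasing placement heuristic are illustrative assumptions.

```python
# Minimal sketch of the pipeline described in the abstract:
# (1) multi-output demand forecasting, (2) clustering predicted demands into
# VM sizes (autoscaling), (3) consolidating the VMs onto energy-efficient PMs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- Stage 1: forecast CPU and memory demand jointly (multi-output) ---------
# Toy workload history: each row is normalised [cpu, mem] usage at one interval;
# the target is the usage at the next interval (one-step-ahead forecast).
history = rng.uniform(0.1, 0.9, size=(500, 2))
X, y = history[:-1], history[1:]
predictor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
predictor.fit(X, y)
predicted = predictor.predict(history[-12:])      # predicted [cpu, mem] per application

# --- Stage 2: autoscale VMs by clustering the predicted demands -------------
# Each cluster centre becomes a VM size; every application gets a VM of its
# cluster's size, so VM capacity tracks the predicted requirement.
k = 3
clustering = KMeans(n_clusters=k, n_init=10, random_state=0).fit(predicted)
vm_sizes = clustering.cluster_centers_.clip(min=0.05)
vms = [vm_sizes[label] for label in clustering.labels_]

# --- Stage 3: consolidate the scaled VMs onto energy-efficient PMs ----------
# First-fit-decreasing by CPU demand onto PMs ordered by (assumed) efficiency,
# so the load ends up on as few efficient machines as possible.
pms = [{"cap": np.array([4.0, 4.0]), "eff": e, "load": np.zeros(2)}
       for e in (0.9, 0.8, 0.7, 0.6, 0.5, 0.4)]
pms.sort(key=lambda pm: -pm["eff"])               # most efficient first

for vm in sorted(vms, key=lambda v: -v[0]):       # decreasing CPU demand
    for pm in pms:
        if np.all(pm["load"] + vm <= pm["cap"]):  # fits in both dimensions
            pm["load"] += vm
            break

active = sum(bool(np.any(pm["load"] > 0)) for pm in pms)
print(f"{len(vms)} VMs consolidated onto {active} of {len(pms)} PMs")
```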