
Energy Aware Grid: Global Workload Placement based on Energy Efficiency



Abstract

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1,000 racks, over 30,000 square feet, would require 10 MW of power for the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, cooling alone would cost about $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, which provides a global utility infrastructure that explicitly incorporates energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions are made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by each data center's resource allocation manager, is a complex function of the data center's thermal management infrastructure and of seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided, with an estimate of energy savings to justify the economics. The example workload deployment shown in the paper seeks out the most energy-efficient data center in the global network of data centers. The locality-based energy efficiency of a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g. computing in and rejecting heat from a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides the obvious advantage due to the external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluid behavior of a data center in the workload placement decision.
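The placement logic described in the abstract — each data center's resource allocation manager reports an energy-efficiency coefficient, and the co-allocator routes work to the best site — can be sketched as follows. This is a minimal illustration only: the 5 MW cooling load and $100/MWh price come from the abstract, but the function names (annual_cooling_cost, pick_data_center), the "choose the highest coefficient" rule, and the coefficient values for New Delhi and Phoenix are hypothetical placeholders, not the authors' actual model.

```python
# Sketch of the cooling-cost arithmetic and a coefficient-based
# co-allocator decision. Figures from the abstract: 5 MW cooling load,
# $100/MWh. Coefficients and the selection rule are assumptions.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours


def annual_cooling_cost(cooling_mw: float, price_per_mwh: float) -> float:
    """Annual cooling electricity cost in dollars."""
    return cooling_mw * HOURS_PER_YEAR * price_per_mwh


def pick_data_center(coefficients: dict) -> str:
    """Pick the data center with the highest energy-efficiency
    coefficient, as reported by each site's resource allocation manager."""
    return max(coefficients, key=coefficients.get)


if __name__ == "__main__":
    # 5 MW of cooling at $100/MWh is roughly $4.4M per year, close to
    # the abstract's figure of about $4 million per annum.
    print(f"Cooling cost: ${annual_cooling_cost(5, 100):,.0f}/yr")

    # Hypothetical coefficients: a 20 C night-time ambient in New Delhi
    # should score better than Phoenix at 45 C due to the lower lift
    # from evaporator to condenser.
    sites = {"new_delhi": 0.82, "phoenix": 0.61}
    print("Place workload at:", pick_data_center(sites))
```

In the paper's framing, the coefficient itself would also fold in internal thermo-fluid behavior and seasonal/diurnal variation, not just the external ambient used in this toy example.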
