Journal: IEEE Transactions on Parallel and Distributed Systems

An Energy-Oriented Evaluation of Buffer Cache Algorithms Using Parallel I/O Workloads


Abstract

Power consumption is an important issue for cluster supercomputers as it directly affects running cost and cooling requirements. This paper investigates the memory energy efficiency of high-end data servers used for supercomputers. Emerging memory technologies allow memory devices to dynamically adjust their power states and enable free rides by overlapping multiple DMA transfers from different I/O buses to the same memory device. To achieve maximum energy saving, the memory management on data servers needs to judiciously utilize these energy-aware devices. As we explore different management schemes under five real-world parallel I/O workloads, we find that the memory energy behavior is determined by a complex interaction among four important factors: (1) cache hit rates that may directly translate performance gain into energy saving, (2) cache populating schemes that perform buffer allocation and affect access locality at the chip level, (3) request clustering that aims to temporally align memory transfers from different buses into the same memory chips, and (4) access patterns in workloads that affect the first three factors.
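To make the request-clustering idea concrete, the following is a minimal, hypothetical sketch (not taken from the paper): pending buffer-cache transfers are grouped by the memory chip assumed to back each buffer, so transfers to one chip are issued back-to-back and the remaining chips can stay in a low-power state for longer stretches. The chip size, the power-state names, and the issue_transfer/set_chip_state callbacks are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of request clustering for an energy-aware buffer cache.
# Transfers destined for the same memory chip are issued together, extending
# the idle intervals during which other chips can remain in a low-power state.

from collections import defaultdict

CHIP_SIZE = 64 * 1024 * 1024  # assumed: each memory chip backs 64 MiB of buffers


def chip_of(buffer_addr: int) -> int:
    """Map a buffer address to the memory chip backing it (assumed linear layout)."""
    return buffer_addr // CHIP_SIZE


def cluster_requests(pending):
    """Group pending (buffer_addr, length) transfers by their target chip."""
    clusters = defaultdict(list)
    for addr, length in pending:
        clusters[chip_of(addr)].append((addr, length))
    return clusters


def dispatch_clustered(pending, issue_transfer, set_chip_state):
    """Drain all transfers for one chip before moving to the next, so chips
    not currently being accessed can be kept in a low-power state."""
    clusters = cluster_requests(pending)
    for chip, transfers in clusters.items():
        set_chip_state(chip, "active")
        for addr, length in transfers:
            issue_transfer(addr, length)
        set_chip_state(chip, "nap")  # assumed name for a low-power state
```

In this sketch the energy benefit comes purely from temporal alignment: the same transfers are performed, but their ordering concentrates activity on one chip at a time, which is the effect the paper's third factor (request clustering) aims for.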
