Journal: IEEE Transactions on Computers

CLU: Co-Optimizing Locality and Utility in Thread-Aware Capacity Management for Shared Last Level Caches


Abstract

Most chip-multiprocessors nowadays adopt a large shared last-level cache (SLLC). This paper is motivated by our analysis and evaluation of state-of-the-art cache management proposals, which reveal a common weakness: the existing alternative replacement policies and cache partitioning schemes, targeted at optimizing either the locality or the utility of co-scheduled threads, cannot consistently deliver the best performance across a variety of workloads. We therefore propose a novel adaptive scheme, called CLU, to interactively co-optimize the locality and utility of co-scheduled threads in thread-aware SLLC capacity management. CLU employs lightweight monitors to dynamically profile the LRU (least recently used) and BIP (bimodal insertion policy) hit curves of individual threads at runtime, enabling the scheme to co-optimize the locality and utility of concurrent threads and thus adapt to a wider range of workloads than existing approaches. We present results from extensive execution-driven simulation experiments to demonstrate the feasibility and efficacy of CLU relative to the existing approaches (TADIP, NUCACHE, TA-DRRIP, UCP, and PIPP).
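The two ingredients the abstract names can be illustrated in software. The sketch below is not CLU's hardware monitor; it is a minimal, assumed-for-illustration model of (a) building a per-thread LRU hit curve from stack distances (Mattson's stack algorithm) and (b) using the curves' marginal hit gains to partition cache ways among threads, in the spirit of utility-based partitioning such as UCP. BIP hit curves, set sampling, and CLU's locality/utility co-optimization logic are omitted.

```python
def lru_hit_curve(trace, max_ways):
    """Build an LRU hit curve for one thread's access trace.
    curve[w] = number of accesses that would hit if the thread
    owned w ways of a fully associative LRU cache.
    (Mattson's stack algorithm; an illustrative sketch only.)"""
    stack = []                      # most recently used at index 0
    hist = [0] * (max_ways + 1)     # stack-distance histogram
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr)   # 0-based stack distance
            if depth < max_ways:
                hist[depth] += 1        # hit with >= depth+1 ways
            stack.remove(addr)
        stack.insert(0, addr)
    curve = [0] * (max_ways + 1)
    for w in range(1, max_ways + 1):    # cumulate histogram
        curve[w] = curve[w - 1] + hist[w - 1]
    return curve

def greedy_partition(curves, total_ways):
    """Assign ways one at a time to the thread with the largest
    marginal hit gain (utility), echoing UCP-style allocation."""
    alloc = [0] * len(curves)
    for _ in range(total_ways):
        best, best_gain = 0, -1
        for t, curve in enumerate(curves):
            gain = curve[alloc[t] + 1] - curve[alloc[t]]
            if gain > best_gain:
                best, best_gain = t, gain
        alloc[best] += 1
    return alloc
```

For example, a thread ping-ponging between two lines (high locality, saturating utility at two ways) draws ways away from a streaming thread whose hit curve is flat at zero, since every marginal way yields it no hits.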
