Context-Aware Prefetching at the Storage Server

Abstract

In many of today's applications, access to storage constitutes the major cost of processing a user request. Data prefetching has been used to alleviate storage access latency. Under current prefetching techniques, the storage system prefetches a batch of blocks upon detecting an access pattern. However, the high level of concurrency in today's applications typically leads to interleaved block accesses, which makes detecting an access pattern a very challenging problem. To address this, we propose and evaluate QuickMine, a novel, lightweight, and minimally intrusive method for context-aware prefetching. Under QuickMine, we capture application contexts, such as a transaction or query, and leverage them for context-aware prediction and improved prefetching effectiveness in the storage cache. We implement a prototype of our context-aware prefetching algorithm in a storage-area network (SAN) built using Network Block Device (NBD). Our prototype shows that context-aware prefetching clearly outperforms existing context-oblivious prefetching algorithms, yielding improvements of up to a factor of 2 in application latency for two e-commerce workloads with repeatable access patterns, TPC-W and RUBiS.
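The core idea in the abstract, tagging each block access with its application context and mining correlations only within a context so that interleaving across concurrent requests cannot pollute the detected patterns, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name `QuickMineSketch`, the `lookahead` parameter, and the rule structure are invented for illustration:

```python
from collections import defaultdict

class QuickMineSketch:
    """Illustrative sketch of context-aware correlation mining.

    Block accesses are grouped by an application context identifier
    (e.g. a transaction or query ID). Block-to-block correlations are
    mined only within the same context, so interleaved accesses from
    concurrent contexts do not create spurious patterns.
    """

    def __init__(self, lookahead=2):
        self.lookahead = lookahead        # how many prior blocks to correlate with
        self.history = defaultdict(list)  # context id -> blocks seen in that context
        self.rules = defaultdict(set)     # block -> blocks that tend to follow it

    def record_access(self, context_id, block):
        # Correlate the new block only with recent blocks from the SAME context.
        for prev in self.history[context_id][-self.lookahead:]:
            self.rules[prev].add(block)
        self.history[context_id].append(block)

    def prefetch_candidates(self, block):
        # Blocks historically accessed shortly after `block` within a context;
        # these are the candidates a prefetcher would fetch into the cache.
        return sorted(self.rules.get(block, ()))


# Two contexts whose accesses arrive interleaved at the storage server:
miner = QuickMineSketch()
for ctx, blk in [("txnA", 1), ("txnB", 7), ("txnA", 2), ("txnB", 8), ("txnA", 3)]:
    miner.record_access(ctx, blk)

print(miner.prefetch_candidates(1))  # per-context pattern for txnA
print(miner.prefetch_candidates(7))  # per-context pattern for txnB
```

A context-oblivious miner seeing the same interleaved stream (1, 7, 2, 8, 3) would correlate block 7 with block 1; grouping by context keeps each pattern clean, which is the effect the abstract attributes to QuickMine.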

