USENIX Annual Technical Conference

Context-Aware Prefetching at the Storage Server



Abstract

In many of today's applications, access to storage constitutes the major cost of processing a user request. Data prefetching has been used to alleviate this storage access latency. Under current prefetching techniques, the storage system prefetches a batch of blocks upon detecting an access pattern. However, the high degree of concurrency in today's applications typically leads to interleaved block accesses, which makes detecting an access pattern very challenging. To address this, we propose and evaluate QuickMine, a novel, lightweight, and minimally intrusive method for context-aware prefetching. Under QuickMine, we capture application contexts, such as a transaction or query, and leverage them for context-aware prediction and improved prefetching effectiveness in the storage cache. We implement a prototype of our context-aware prefetching algorithm in a storage-area network (SAN) built using Network Block Device (NBD). Our prototype shows that context-aware prefetching clearly outperforms existing context-oblivious prefetching algorithms, resulting in improvements of up to a factor of 2 in application latency for two e-commerce workloads with repeatable access patterns, TPC-W and RUBiS.
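The core idea in the abstract, tagging each block access with its application context (e.g., a transaction or query id) so that correlations are mined per context rather than over the interleaved aggregate stream, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the successor-frequency model, and all parameters are illustrative assumptions.

```python
from collections import defaultdict, deque

class ContextAwareMiner:
    """Illustrative sketch of context-aware correlation mining (not QuickMine's code).

    Each block access carries a context id. Correlations between nearby
    blocks are learned within a single context's stream, so concurrent
    contexts do not pollute each other's access patterns.
    """

    def __init__(self, lookahead=2, prefetch_batch=2):
        self.prefetch_batch = prefetch_batch
        # per-context sliding window of recent block accesses
        self.history = defaultdict(lambda: deque(maxlen=lookahead))
        # learned rules: block -> {successor block: observed count}
        self.rules = defaultdict(lambda: defaultdict(int))

    def access(self, ctx, block):
        """Record one access and return candidate blocks to prefetch."""
        # learn: recent blocks in the SAME context precede this block
        for prev in self.history[ctx]:
            self.rules[prev][block] += 1
        self.history[ctx].append(block)
        # predict: most frequently observed successors of this block
        successors = self.rules.get(block)
        if not successors:
            return []
        ranked = sorted(successors, key=successors.get, reverse=True)
        return ranked[:self.prefetch_batch]

# Two contexts whose sequential streams arrive interleaved at the server:
miner = ContextAwareMiner()
trace = [("A", 1), ("B", 10), ("A", 2), ("B", 11), ("A", 3), ("B", 12)]
for ctx, blk in trace:
    miner.access(ctx, blk)
```

A context-oblivious miner would see the mixed stream 1, 10, 2, 11, 3, 12 and learn spurious cross-context rules; keying the history on the context id recovers the clean per-context sequences.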
