IEEE Transactions on Parallel and Distributed Systems

Using processor-cache affinity information in shared-memory multiprocessor scheduling


Abstract

In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing at a processor and releasing this processor due to I/O, synchronization, quantum expiration, or preemption. Queuing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis, where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of an initial burst of cache misses experienced by tasks when they return to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.
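The policies examined in the paper are abstract queuing-network models rather than code, but the point that only a small amount of affinity information needs to be maintained per task can be illustrated with a short sketch. The following is a hypothetical affinity-aware dispatcher in Python, not the paper's actual policies: each task records only the processor it last ran on, and an idle processor first scans the ready queue for a task with matching affinity before falling back to FCFS.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class Task:
    tid: int
    last_cpu: Optional[int] = None  # the only affinity information kept per task


class AffinityScheduler:
    """Hypothetical dispatcher: prefer the task most likely to still have
    a cache footprint on the idle processor, else fall back to FCFS."""

    def __init__(self, num_cpus: int):
        self.num_cpus = num_cpus
        self.ready = deque()  # shared FCFS ready queue of Task objects

    def release(self, task: Task) -> None:
        # Task gives up its processor (I/O, synchronization, quantum
        # expiration, or preemption) and rejoins the ready queue.
        self.ready.append(task)

    def dispatch(self, cpu: int) -> Optional[Task]:
        # First pass: look for a task that last ran on this processor and
        # may therefore avoid the initial burst of cache misses.
        for i, task in enumerate(self.ready):
            if task.last_cpu == cpu:
                del self.ready[i]
                task.last_cpu = cpu
                return task
        # No affinity match: take the longest-waiting task (plain FCFS).
        if self.ready:
            task = self.ready.popleft()
            task.last_cpu = cpu
            return task
        return None


if __name__ == "__main__":
    sched = AffinityScheduler(num_cpus=2)
    for t in (Task(0, last_cpu=1), Task(1, last_cpu=0), Task(2)):
        sched.release(t)
    print(sched.dispatch(cpu=0).tid)  # 1: affinity match for CPU 0
    print(sched.dispatch(cpu=0).tid)  # 0: no match left, FCFS fallback
```

In the paper's terms, a heuristic like this sits between the two extremes it models, ignoring affinity entirely and fixing tasks on processors; an adaptive policy would additionally weigh affinity against system load as conditions change.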
