
Load balancing and parallelism for the Internet.



Abstract

Problem. High-speed networks, including the Internet backbone, suffer from a well-known problem: packets arrive at high-speed routers much faster than commodity memory can support. On a 10 Gb/s link, a packet can arrive every 32 ns, while memory can only be accessed once every ∼50 ns. By 1997, this had been identified as a fundamental problem on the horizon. As link rates increase (usually at the rate of Moore's Law), the performance gap widens and the problem only gets worse. The problem is hard because packets can arrive in any order and require unpredictable operations on many data structures in memory. So, as in many computing systems, router performance is limited by the available memory technology. If we are unable to bridge this performance gap, then (1) we cannot build Internet routers that reliably support links faster than 10 Gb/s; (2) routers cannot support the needs of real-time applications such as voice, video conferencing, multimedia, and gaming, which require guaranteed performance; and (3) hackers or viruses can easily exploit the memory-performance loopholes in a router and bring down the Internet.

Contributions. This thesis lays a theoretical foundation for solving the memory-performance problem in high-speed routers. It brings several high-speed router architectures under a common umbrella, and introduces a general principle called "Constraint Sets" to analyze them. We derive fourteen fundamental (not ephemeral) solutions to the memory-performance problem. These fall into two types: (1) load-balancing algorithms that distribute load over slower memories and guarantee, with no exceptions whatsoever, that a memory is available whenever data needs to be accessed; and (2) caching algorithms that guarantee data is available in the cache 100% of the time. These robust guarantees are surprising, but their validity is proven analytically.

Results and current usage.
Our results are practical: at the time of writing, more than 6M instances of our techniques (across over 25 unique products) are being made available annually. It is estimated that up to ∼80% of all high-speed Ethernet switches and enterprise routers in the Internet will use these techniques. Our techniques are currently being designed into the next generation of 100 Gb/s router line cards, and are also planned for deployment in Internet core routers.

Primary consequences. The primary consequences of our results are that (1) routers are no longer dependent on memory speeds to achieve high performance; (2) routers can better provide strict performance guarantees for critical future applications (e.g., remote surgery, supercomputing, distributed orchestras); and (3) the router data-path applications for which we provide solutions are safe from malicious memory-performance attacks, both now and, provably, forever.

Secondary consequences. We have modified the techniques in this thesis to solve the memory-performance problems of other router applications, including VOQ buffering, storage, page allocation, and virtual memory management. The techniques have also helped routers increase memory reliability, simplify memory redundancy, and enable hot-swappable recovery from memory failures. They have helped reduce worst-case memory power (by ∼25-50%) and automatically reduce average-case memory and I/O power (which can yield dramatic power reductions in networks, which usually have low utilization). They have enabled the use of complementary memory-serialization technologies, reduced pin counts on packet-processing ASICs, approximately halved the physical area needed to build a router line card, and made routers more affordable (e.g., by reducing memory cost by ∼50%, and significantly reducing ASIC and board costs). In summary, they have led to considerable engineering, economic, and environmental benefits.

Applicability and caveats.
Our techniques exploit the fundamental nature of memory access, so their applicability is not limited to networking. However, our techniques are not a panacea. As routers become faster and more complex, we need to cater to the memory-performance needs of an ever-increasing number of router applications. This has led to new research on memory-aware algorithmic design.
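As a concrete illustration of the first class of solutions described above (load balancing over slower memories), here is a minimal Python sketch. It is a toy under stated assumptions, not the thesis's actual Constraint Sets analysis: the class name `BankedPacketBuffer`, the greedy first-idle-bank policy, and the parameters are all hypothetical. It demonstrates only the core invariant: with `b` memory banks, each busy for `B` time slots after an access, and at most one write per slot, `b >= B` guarantees an idle bank always exists.

```python
class BankedPacketBuffer:
    """Toy model: emulate one fast memory with several slow banks.

    Each bank takes B time slots to complete an access; one packet
    arrives per slot. With b >= B banks, at most B - 1 banks can be
    busy at any slot, so a greedy search always finds an idle bank.
    (Illustrative assumption, not the thesis's algorithm.)
    """

    def __init__(self, num_banks: int, bank_busy_slots: int):
        assert num_banks >= bank_busy_slots, "need b >= B banks to hide latency"
        self.B = bank_busy_slots
        self.busy_until = [0] * num_banks  # slot at which each bank becomes idle
        self.location = {}                 # packet id -> bank holding it

    def write(self, slot: int, packet_id: int) -> int:
        # Constraint: only banks that are idle at this slot are candidates.
        for bank, free_at in enumerate(self.busy_until):
            if free_at <= slot:
                self.busy_until[bank] = slot + self.B  # bank busy for B slots
                self.location[packet_id] = bank
                return bank
        raise RuntimeError("unreachable when num_banks >= bank_busy_slots")


# One packet per slot; an idle bank is always found, with no exceptions.
buf = BankedPacketBuffer(num_banks=4, bank_busy_slots=4)
for t in range(8):
    buf.write(slot=t, packet_id=t)
```

With `b = B` the greedy policy degenerates to round-robin; a real design needs slack banks (`b > B`) so that reads competing with writes can also be satisfied, which is where an analysis like Constraint Sets is needed.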
