Many networking problems suffer from the so-called curse of dimensionality: although excellent (even optimal) solutions exist for these problems, they do not scale well to high speeds or large systems. In many situations where deterministic algorithms scale poorly, randomized versions of the same algorithms are easier to implement and provide surprisingly good performance. For example, recent work on load balancing and on document replacement in Web caches provides compelling demonstrations of the effectiveness of such randomized algorithms. Motwani and Raghavan provide other examples and a good introduction to the theory of randomized algorithms.
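One well-known illustration of this idea from the load-balancing literature is the "power of d choices" strategy: rather than tracking the exact load of every server (which scales poorly), each task samples a small number of servers at random and joins the least loaded one. The sketch below is an assumption for illustration, not the specific algorithm the text cites; function and variable names are hypothetical.

```python
import random

def assign_tasks(num_tasks, num_bins, d=2):
    """Randomized load balancing: place each task on the least-loaded
    of d bins sampled uniformly at random (the 'power of d choices').
    With d=1 this degenerates to plain random placement."""
    loads = [0] * num_bins
    for _ in range(num_tasks):
        # Sample d distinct candidate bins instead of inspecting all bins.
        candidates = random.sample(range(num_bins), d)
        # Greedily pick the least-loaded candidate.
        target = min(candidates, key=lambda b: loads[b])
        loads[target] += 1
    return loads

# Comparing d=1 (pure random) against d=2 typically shows a markedly
# smaller maximum load for d=2, with only constant extra work per task.
one_choice = max(assign_tasks(10_000, 100, d=1))
two_choice = max(assign_tasks(10_000, 100, d=2))
```

The appeal matches the paragraph's point: the deterministic alternative (always picking the globally least-loaded server) requires global state, whereas sampling two servers needs only constant work per task yet achieves near-balanced loads with high probability.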