Within multi-agent systems, some agents may delegate tasks to other agents for execution. Recursive delegation refers to situations where delegated tasks may, in turn, be delegated onwards. In unconstrained environments, recursive delegation policies based on quitting games are known to outperform policies based on multi-armed bandits. In this work, we incorporate allocation rules and rewarding schemes when considering recursive delegation, and reinterpret the quitting-game approach in terms of coalitions, employing the Shapley and Myerson values to guide delegation decisions. We empirically evaluate our extensions and demonstrate that they outperform the traditional multi-armed-bandit-based approach, while offering a resource-efficient alternative to the quitting-game heuristic.
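To make the coalition-based ingredient concrete, the following is a minimal sketch of an exact Shapley value computation by enumerating agent orderings. The characteristic function `v` below is purely hypothetical (a delegation only succeeds when two specific agents cooperate) and is not taken from this paper; it only illustrates how such values could score agents when guiding delegation decisions.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values via enumeration of all player orderings.

    players: list of agent identifiers
    v: characteristic function mapping a frozenset coalition -> value
    """
    n_orderings = 0
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        n_orderings += 1
        coalition = frozenset()
        for p in order:
            # Credit each agent with its marginal contribution
            # to the coalition of agents preceding it.
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)
            coalition = with_p
    # Average marginal contributions over all orderings.
    return {p: phi[p] / n_orderings for p in players}

# Hypothetical toy example: the delegated task succeeds (value 1.0)
# only if agents "a" and "b" are both in the coalition.
def v(coalition):
    return 1.0 if {"a", "b"} <= coalition else 0.0

print(shapley_values(["a", "b", "c"], v))
# "a" and "b" split the credit equally; "c" contributes nothing.
```

Enumeration is exponential in the number of agents, so in practice sampling-based approximations of the Shapley value are common; the sketch only fixes the ideas.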