Evolutionary Computation

Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes


Abstract

Selection hyper-heuristics (HHs) are randomised search methodologies that choose and execute heuristics from a set of low-level heuristics during the optimisation process. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end, we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue exploiting the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the "simple" Random Gradient HH so that success can be measured over a fixed period of τ iterations, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove that it achieves the best possible performance attainable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and evolutionary algorithms using standard bit mutation increase if anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
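
To make the mechanism concrete, here is a minimal Python sketch of the Generalised Random Gradient HH with Randomised Local Search operators of neighbourhood sizes 1 through k_max as the low-level heuristics. It is an illustration under our own assumptions (in particular, the decision period restarts as soon as an improvement is found within the current period of τ iterations); it is not the authors' reference implementation, and the function names are hypothetical.

    import random

    def leading_ones(x):
        # LeadingOnes: count of consecutive 1-bits at the start of the bit string.
        count = 0
        for bit in x:
            if bit != 1:
                break
            count += 1
        return count

    def rls_k(x, k):
        # Randomised Local Search operator with neighbourhood size k:
        # flip k distinct bit positions chosen uniformly at random.
        y = x[:]
        for i in random.sample(range(len(x)), k):
            y[i] = 1 - y[i]
        return y

    def generalised_random_gradient(n, k_max, tau):
        # Pick a neighbourhood size uniformly at random; whenever the chosen
        # operator produces an improvement within a period of tau iterations,
        # grant it another period, otherwise pick a new size at random.
        x = [random.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        evaluations = 0
        while fx < n:
            k = random.randint(1, k_max)      # random low-level heuristic
            success = True
            while success and fx < n:
                success = False
                for _ in range(tau):          # one decision period
                    y = rls_k(x, k)
                    fy = leading_ones(y)
                    evaluations += 1
                    if fy > fx:               # strict improvement: keep this k
                        x, fx = y, fy
                        success = True
                        break
        return evaluations

For example, generalised_random_gradient(n=1000, k_max=2, tau=5000) optimises LeadingOnes while switching between 1-bit and 2-bit flips; as the abstract notes, good values of τ depend on the situation and are explored experimentally in the paper.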
