Parameter Control of Genetic Algorithms by Learning and Simulation of Bayesian Networks — A Case Study for the Optimal Ordering of Tables
Abstract: Parameter setting for evolutionary algorithms is still an important issue in evolutionary computation. There are two main approaches to parameter setting: parameter tuning and parameter control. In this paper, we introduce self-adaptive parameter control of a genetic algorithm based on Bayesian network learning and simulation. The nodes of this Bayesian network are the genetic algorithm parameters to be controlled. Its structure captures the probabilistic conditional (in)dependence relationships between these parameters, and it is learned from the best individuals, i.e., the best configurations of the genetic algorithm. Individuals are evaluated by running the genetic algorithm with the respective parameter configuration. Since all of these runs are time-consuming, each genetic algorithm uses a small population and is stopped before convergence. In this way, promising individuals should not be lost. Experiments on an optimal search problem for simultaneous row and column orderings yield the same optima as state-of-the-art methods, but with a sharp reduction in computation time. Moreover, our approach can cope with high-dimensional problems that have so far remained unsolved.
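To make the control loop described in the abstract concrete, the following Python sketch illustrates the general idea under simplifying assumptions; it is not the authors' implementation. In particular, the learned Bayesian network is replaced here by independent marginal distributions over each parameter (so the conditional dependencies the paper actually learns are ignored), the parameter domains in PARAM_DOMAINS are hypothetical, and run_short_ga is a placeholder standing in for a short, small-population GA run stopped before convergence.

```python
import random

# Hypothetical domains for the GA parameters being controlled (illustrative only).
PARAM_DOMAINS = {
    "population_size": [20, 50, 100],
    "crossover_rate": [0.6, 0.8, 0.9],
    "mutation_rate": [0.01, 0.05, 0.1],
}

def run_short_ga(config):
    """Placeholder for evaluating one parameter configuration by a short,
    small-population GA run stopped before convergence; returns a score."""
    # Surrogate objective used only so the sketch is runnable.
    return -(abs(config["crossover_rate"] - 0.8)
             + abs(config["mutation_rate"] - 0.05)
             + abs(config["population_size"] - 50) / 100.0)

def parameter_control(generations=20, pop=30, top_k=10, seed=0):
    rng = random.Random(seed)
    # Start from uniform marginals over each parameter's possible values.
    probs = {p: [1.0 / len(v)] * len(v) for p, v in PARAM_DOMAINS.items()}
    best = None
    for _ in range(generations):
        # Sample candidate GA configurations from the current probabilistic model.
        configs = [
            {p: rng.choices(vals, weights=probs[p])[0]
             for p, vals in PARAM_DOMAINS.items()}
            for _ in range(pop)
        ]
        # Evaluate each configuration with a cheap GA run and keep the best ones.
        scored = sorted(configs, key=run_short_ga, reverse=True)
        elite = scored[:top_k]
        if best is None or run_short_ga(elite[0]) > run_short_ga(best):
            best = elite[0]
        # Refit the model to the elite configurations (smoothed frequency counts).
        for p, vals in PARAM_DOMAINS.items():
            counts = [1 + sum(c[p] == v for c in elite) for v in vals]
            total = sum(counts)
            probs[p] = [c / total for c in counts]
    return best

if __name__ == "__main__":
    print(parameter_control())
```

In the paper's method, the refitting step would instead learn the structure and parameters of a Bayesian network from the elite configurations, and new configurations would be sampled from that network; the loop shown here only mirrors the sample-evaluate-select-refit cycle.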