Translation of Foreign Literature: Genetic Algorithms

What is a genetic algorithm?

●Methods of representation

●Methods of selection

●Methods of change

●Other problem-solving techniques

Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy. Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim of the GA being to improve them, but more often they are generated at random.

The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these will be deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem.

These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation. Those candidate solutions which were worsened, or made no better, by the changes to their code are again deleted; but again, purely by chance, the random variations introduced into the population may have improved some individuals, making them into better, more complete or more efficient solutions to the problem at hand. Again these winning individuals are selected and copied over into the next generation with random changes, and the process repeats. The expectation is that the average fitness of the population will increase each round, and so by repeating this process for hundreds or thousands of rounds, very good solutions to the problem can be discovered.
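To make the loop just described concrete, the short Python sketch below applies it to a toy problem: maximizing the number of 1s in a fixed-length bit string. The fitness function, population size, mutation rate and other parameter values are illustrative assumptions, not part of the original text.

import random

LENGTH = 20          # bits per candidate
POP_SIZE = 30        # candidates per generation
MUTATION_RATE = 0.02 # per-bit chance of flipping during copying
GENERATIONS = 100

def fitness(candidate):
    # Count of 1-bits: a stand-in for any quantitative evaluation metric.
    return sum(candidate)

def mutate(candidate):
    # Imperfect copying: each bit may flip with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

# Start from a pool of randomly generated candidates.
population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Evaluate every candidate and keep the more promising half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Survivors reproduce with random changes to refill the pool.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", fitness(best))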

As astonishing and counterintuitive as it may seem to some, genetic algorithms have proven to be an enormously powerful and successful problem-solving strategy, dramatically demonstrating the power of evolutionary principles. Genetic algorithms have been used in a wide variety of fields to evolve solutions to problems as difficult as or more difficult than those faced by human designers. Moreover, the solutions they come up with are often more efficient, more elegant, or more complex than anything comparable a human engineer would produce. In some cases, genetic algorithms have come up with solutions that baffle the programmers who wrote the algorithms in the first place!

Methods of representation

Before a genetic algorithm can be put to work on any problem, a method is needed to encode potential solutions to that problem in a form that a computer can process. One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where the digit at each position represents the value of some aspect of the solution. Another, similar approach is to encode solutions as arrays of integers or decimal numbers, with each position again representing some particular aspect of the solution. This approach allows for greater precision and complexity than the comparatively restricted method of using binary numbers only and often "is intuitively closer to the problem space" (Fleming and Purshouse 2002, p. 1228).
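As a rough illustration of these two encodings, the Python sketch below builds a candidate for a hypothetical two-parameter problem first as a fixed-width binary string and then as a real-valued array; the field widths and value ranges are assumptions made purely for the example.

import random

# Binary-string encoding: each parameter occupies a fixed field of bits.
def random_binary_candidate(bits_per_param=8, n_params=2):
    return [random.randint(0, 1) for _ in range(bits_per_param * n_params)]

def decode_binary(candidate, bits_per_param=8, low=0.0, high=10.0):
    # Map each fixed-width bit field to a real value in [low, high].
    values = []
    for i in range(0, len(candidate), bits_per_param):
        field = candidate[i:i + bits_per_param]
        integer = int("".join(map(str, field)), 2)
        values.append(low + (high - low) * integer / (2 ** bits_per_param - 1))
    return values

# Real-valued encoding: each position holds the parameter value directly,
# which gives finer precision and is often closer to the problem space.
def random_real_candidate(n_params=2, low=0.0, high=10.0):
    return [random.uniform(low, high) for _ in range(n_params)]

print(decode_binary(random_binary_candidate()))
print(random_real_candidate())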

This technique was used, for example, in the work of Steffen Schulze-Kremer, who wrote a genetic algorithm to predict the three-dimensional structure of a protein based on the sequence of amino acids that go into it (Mitchell 1996, p. 62). Schulze-Kremer's GA used real-valued numbers to represent the so-called "torsion angles" between the peptide bonds that connect amino acids. (A protein is made up of a sequence of basic building blocks called amino acids, which are joined together like the links in a chain. Once all the amino acids are linked, the protein folds up into a complex three-dimensional shape based on which amino acids attract each other and which ones repel each other. The shape of a protein determines its function.) Genetic algorithms for training neural networks often use this method of encoding also.

A third approach is to represent individuals in a GA as strings of letters, where each letter again stands for a specific aspect of the solution. One example of this technique is Hiroaki Kitano's "grammatical encoding" approach, where a GA was put to the task of evolving a simple set of rules called a context-free grammar that was in turn used to generate neural networks for a variety of problems (Mitchell 1996, p. 74).

The virtue of all three of these methods is that they make it easy to define operators that cause the random changes in the selected candidates: flip a 0 to a 1 or vice versa, add or subtract from the value of a number by a randomly chosen amount, or change one letter to another. (See the section on Methods of change for more detail about the genetic operators.) Another strategy, developed principally by John Koza of Stanford University and called genetic programming, represents programs as branching data structures called trees (Koza et al. 2003, p. 35). In this approach, random changes can be brought about by changing the operator or altering the value at a given node in the tree, or replacing one subtree with another.

Figure 1: Three simple program trees of the kind normally used in genetic programming. The mathematical expression that each one represents is given underneath.
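The simple change operators described above for the string-based representations might look like the following Python sketch, with one operator per representation; the perturbation size and the letter alphabet are arbitrary choices for the example.

import random
import string

def flip_bit(bits, position):
    # Binary strings: flip a 0 to a 1 or vice versa at one position.
    mutated = list(bits)
    mutated[position] ^= 1
    return mutated

def perturb_number(values, position, step=0.5):
    # Numeric arrays: add or subtract a randomly chosen amount.
    mutated = list(values)
    mutated[position] += random.uniform(-step, step)
    return mutated

def substitute_letter(letters, position, alphabet=string.ascii_uppercase):
    # Letter strings: change one letter to another.
    mutated = list(letters)
    mutated[position] = random.choice(alphabet.replace(mutated[position], ""))
    return mutated

print(flip_bit([1, 0, 1, 1], 1))
print(perturb_number([2.0, 3.5], 0))
print("".join(substitute_letter(list("ABBA"), 2)))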

It is important to note that evolutionary algorithms do not need to represent candidate solutions as data strings of fixed length. Some do represent them in this way, but others do not; for example, Kitano's grammatical encoding discussed above can be efficiently scaled to create large and complex neural networks, and Koza's genetic programming trees can grow arbitrarily large as necessary to solve whatever problem they are applied to.

Methods of selection

There are many different techniques which a genetic algorithm can use to select the individuals to be copied over into the next generation, but listed below are some of the most common methods; a short code sketch of two of them follows the list. Some of these methods are mutually exclusive, but others can be and often are used in combination.

Elitist selection: The most fit members of each generation are guaranteed to be selected. (Most GAs do not use pure elitism, but instead use a modified form where the single best, or a few of the best, individuals from each generation are copied into the next generation just in case nothing better turns up.)

Fitness-proportionate selection: More fit individuals are more likely, but not certain, to be selected.

Roulette-wheel selection: A form of fitness-proportionate selection in which the chance of an individual's being selected is proportional to the amount by which its fitness is greater or less than its competitors' fitness. (Conceptually, this can be represented as a game of roulette - each individual gets a slice of the wheel, but more fit ones get larger slices than less fit ones. The wheel is then spun, and whichever individual "owns" the section on which it lands each time is chosen.)

Scaling selection: As the average fitness of the population increases, the strength of the selective pressure also increases and the fitness function becomes more discriminating. This method can be helpful in making the best selection later on when all individuals have relatively high fitness and only small differences in fitness distinguish one from another.

Tournament selection: Subgroups of individuals are chosen from the larger population, and members of each subgroup compete against each other. Only one individual from each subgroup is chosen to reproduce.

Rank selection: Each individual in the population is assigned a numerical rank based on fitness, and selection is based on this ranking rather than absolute differences in fitness. The advantage of this method is that it can prevent very fit individuals from gaining dominance early at the expense of less fit ones, which would reduce the population's genetic diversity and might hinder attempts to find an acceptable solution.

Generational selection: The offspring of the individuals selected from each generation become the entire next generation. No individuals are retained between generations.

Steady-state selection: The offspring of the individuals selected from each generation go back into the pre-existing gene pool, replacing some of the less fit members of the previous generation. Some individuals are retained between generations.

Hierarchical selection: Individuals go through multiple rounds of selection each generation. Lower-level evaluations are faster and less discriminating, while those that survive to higher levels are evaluated more rigorously. The advantage of this method is that it reduces overall computation time by using faster, less selective evaluation to weed out the majority of individuals that show little or no promise, and only subjecting those who survive this initial test to more rigorous and more computationally expensive fitness evaluation.
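As promised above, here is a minimal Python sketch of two of these schemes, roulette-wheel and tournament selection. The example population and fitness values are invented for illustration; in a real GA they would come from the problem's fitness function.

import random

def roulette_wheel(population, fitnesses):
    # Each individual gets a slice of the wheel proportional to its fitness.
    total = sum(fitnesses)
    spin = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if spin <= running:
            return individual
    return population[-1]  # guard against floating-point round-off

def tournament(population, fitnesses, size=3):
    # A small subgroup competes; only its fittest member reproduces.
    contenders = random.sample(range(len(population)), size)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return population[winner]

pop = ["A", "B", "C", "D"]
fit = [1.0, 4.0, 2.5, 0.5]
print(roulette_wheel(pop, fit), tournament(pop, fit))

In practice such schemes are frequently combined with elitism, as noted above, so that the current best individual is never lost between generations.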

Methods of change

Once selection has chosen fit individuals, they must be randomly altered in hopes of improving their fitness for the next generation. There are two basic strategies to accomplish this. The first and simplest is called mutation. Just as mutation in living things changes one gene to another, so mutation in a genetic algorithm causes small alterations at single points in an individual's code.

The second method is called crossover, and entails choosing two individuals to swap segments of their code, producing artificial "offspring" that are combinations of their parents. This process is intended to simulate the analogous process of recombination that occurs to chromosomes during sexual reproduction. Common forms of crossover include single-point crossover, in which a point of exchange is set at a random location in the two individuals' genomes, and one individual contributes all its code from before that point and the other contributes all its code from after that point to produce an offspring, and uniform crossover, in which the value at any given location in the offspring's genome is either the value of one parent's genome at that location or the value of the other parent's genome at that location, chosen with 50/50 probability.

Figure 2: Crossover and mutation. The above diagrams illustrate the effect of each of these genetic operators on individuals in a population of 8-bit strings. The upper diagram shows two individuals undergoing single-point crossover; the point of exchange is set between the fifth and sixth positions in the genome, producing a new individual that is a hybrid of its progenitors. The second diagram shows an individual undergoing mutation at position 4, changing the 0 at that position in its genome to a 1.
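A minimal Python sketch of these operators on 8-bit strings like those in Figure 2 might look as follows; here the crossover point and mutation position are chosen at random rather than fixed as in the figure.

import random

def single_point_crossover(parent_a, parent_b):
    # One parent contributes everything before the cut, the other
    # everything after it.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def uniform_crossover(parent_a, parent_b):
    # Each position comes from either parent with 50/50 probability.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def point_mutation(individual):
    # Flip a single randomly chosen bit.
    position = random.randrange(len(individual))
    mutated = list(individual)
    mutated[position] ^= 1
    return mutated

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [0, 1, 0, 0, 1, 1, 0, 1]
print(single_point_crossover(a, b))
print(uniform_crossover(a, b))
print(point_mutation(a))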

Other problem-solving techniques

With the rise of artificial life computing and the development of heuristic methods, other computerized problem-solving techniques have emerged that are in some ways similar to genetic algorithms. This section explains some of these techniques, in what ways they resemble GAs and in what ways they differ.

●Neural networks

A neural network, or neural net for short, is a problem-solving method based on a computer model of how neurons are connected in the brain. A neural network consists of layers of processing units called nodes joined by directional links: one input layer, one output layer, and zero or more hidden layers in between. An initial pattern of input is presented to the input layer of the neural network, and nodes that are stimulated then transmit a signal to the nodes of the next layer to which they are connected. If the sum of all the inputs entering one of these virtual neurons is higher than that neuron's so-called activation threshold, that neuron itself activates, and passes on its own signal to neurons in the next layer. The pattern of activation therefore spreads forward until it reaches the output layer and is there returned as a solution to the presented input. Just as in the nervous system of biological organisms, neural networks learn and fine-tune their performance over time via repeated rounds of adjusting their thresholds until the actual output matches the desired output for any given input. This process can be supervised by a human experimenter or may run automatically using a learning algorithm (Mitchell 1996, p. 52). Genetic algorithms have been used both to build and to train neural networks.

Figure 3: A simple feedforward neural network, with one input layer consisting of four neurons, one hidden layer consisting of three neurons, and one output layer consisting of four neurons. The number on each neuron represents its activation threshold: it will only fire if it receives at least that many inputs. The diagram shows the neural network being presented with an input string and shows how activation spreads forward through the network to produce an output.
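A small Python sketch of this kind of threshold-based forward activation is given below. The layer sizes, connection pattern and threshold values are invented for illustration and are not the actual values shown in Figure 3; signals are assumed to be binary.

def layer_activations(inputs, connections, thresholds):
    # connections[j] lists which units in the previous layer feed unit j.
    # A unit fires (outputs 1) only if the number of active incoming
    # signals reaches its activation threshold.
    outputs = []
    for sources, threshold in zip(connections, thresholds):
        incoming = sum(inputs[i] for i in sources)
        outputs.append(1 if incoming >= threshold else 0)
    return outputs

input_pattern = [1, 0, 1, 1]  # pattern presented to the input layer
hidden = layer_activations(input_pattern,
                           connections=[[0, 1], [1, 2], [2, 3]],
                           thresholds=[1, 2, 2])
output = layer_activations(hidden,
                           connections=[[0], [0, 1], [1, 2], [2]],
                           thresholds=[1, 1, 2, 1])
print(hidden, output)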

●Hill-climbing

Similar to genetic algorithms, though more systematic and less random, a hill-climbing algorithm begins with one initial solution to the problem at hand, usually chosen at random. The string is then mutated, and if the mutation results in higher fitness for the new solution than for the previous one, the new solution is kept; otherwise, the current solution is retained. The algorithm is then repeated until no mutation can be found that causes an increase in the current solution's fitness, and this solution is returned as the result (Koza et al. 2003, p. 59). (To understand where the name of this technique comes from, imagine that the space of all possible solutions to a given problem is represented as a three-dimensional contour landscape. A given set of coordinates on that landscape represents one particular solution. Those solutions that are better are higher in altitude, forming hills and peaks; those that are worse are lower in altitude, forming valleys. A "hill-climber" is then an algorithm that starts out at a given point on the landscape and moves inexorably uphill.) Hill-climbing is what is known as a greedy algorithm, meaning it always makes the best choice available at each step in the hope that the overall best result can be achieved this way. By contrast, methods such as genetic algorithms and simulated annealing, discussed below, are not greedy; these methods sometimes make suboptimal choices in the hopes that they will lead to better solutions later on.
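A minimal Python sketch of such a hill-climber, applied to the same toy "count the 1s" fitness used in the earlier sketches, might look like this; the bit-string length is an arbitrary choice for the example.

import random

def fitness(bits):
    return sum(bits)

def hill_climb(length=20):
    # Start from one randomly chosen solution.
    current = [random.randint(0, 1) for _ in range(length)]
    improved = True
    while improved:
        improved = False
        # Try every single-bit mutation; keep any that raises fitness.
        for position in range(length):
            candidate = list(current)
            candidate[position] ^= 1
            if fitness(candidate) > fitness(current):
                current = candidate
                improved = True
        # Stop when no mutation increases the current solution's fitness.
    return current

print(fitness(hill_climb()))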

●Simulated annealing

Another optimization technique similar to evolutionary algorithms is known as simulated annealing. The idea borrows its name from the industrial process of annealing in which a material is heated to above a critical point to soften it, then gradually cooled in order to erase defects in its crystalline structure, producing a more stable and regular lattice arrangement of atoms (Haupt and Haupt 1998, p. 16). In simulated annealing, as in genetic algorithms, there is a fitness function that defines a fitness landscape; however, rather than a population of candidates as in GAs, there is only one candidate solution. Simulated annealing also adds the concept of "temperature", a global numerical quantity which gradually decreases over time. At each step of the algorithm, the solution mutates (which is equivalent to moving to an adjacent point of the fitness landscape). The fitness of the new solution is then compared to the fitness of the previous solution; if it is higher, the new solution is kept. Otherwise, the algorithm makes a decision whether to keep or discard it based on temperature. If the temperature is high, as it is initially, even changes that cause significant decreases in fitness may be kept and used as the basis for the next round of the algorithm, but as temperature decreases, the algorithm becomes more and more inclined to only accept fitness-increasing changes. Finally, the temperature reaches zero and the system "freezes"; whatever configuration it is in at that point becomes the solution. Simulated annealing is often used for engineering design applications such as determining the physical layout of components on a computer chip (Kirkpatrick, Gelatt and Vecchi 1983).
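The temperature-dependent acceptance decision described here is commonly implemented with an exponential (Metropolis-style) probability. The Python sketch below shows one such implementation on the same toy bit-string fitness as the earlier sketches; the starting temperature and cooling rate are illustrative assumptions rather than values from the original text.

import math
import random

def fitness(bits):
    return sum(bits)

def mutate(bits):
    # Move to an adjacent point on the landscape: flip one bit.
    neighbour = list(bits)
    neighbour[random.randrange(len(bits))] ^= 1
    return neighbour

def simulated_annealing(length=20, temperature=5.0, cooling=0.95, steps=500):
    current = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        candidate = mutate(current)
        delta = fitness(candidate) - fitness(current)
        # Better solutions are always kept; worse ones are kept with a
        # probability that shrinks as the temperature falls.
        if delta >= 0 or random.random() < math.exp(delta / max(temperature, 1e-9)):
            current = candidate
        temperature *= cooling
    return current

print(fitness(simulated_annealing()))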

