Wednesday, March 9, 2016

Appendix 7.2 Modeling Evolution

Well, it turns out my 100-coin-toss model exceeds all expectations.. yes, the one proposed in Appendix 5, which Dr J programmed (see App 6).

The program demonstrates that with a single mutation applied at a random position in each of the 10 genes (each 100 long), and with 50% selection of the best (fittest = largest count of positions matching the 0101.. sequence), the model converges on the target 100-long 0101.. string in 2000 to 4000 generations. The advantage of this pattern is that it has an equal number of heads and tails, which makes the improbability a function only of the order and not of the number of heads or tails, which over any large random sample will be approximately equal anyway. In entropy terms, equal heads and tails is the macrostate with the largest number of microstates.
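For readers who want to try this themselves, here is a minimal sketch of the model as described above. The function name and defaults are my own; the original program from App 6 may differ in its details:

```python
import random

def run_model(length=100, pop_size=10, mutations=1, max_gens=50_000, seed=None):
    """Sketch of the coin-toss model: pop_size genes of `length` bits each,
    fitness = number of positions matching the alternating 0101.. target.
    Each generation every gene gets `mutations` fresh coin tosses at random
    positions, then the best 50% are kept and duplicated.
    Returns the generation of convergence, or None if the budget runs out."""
    rng = random.Random(seed)
    target = [i % 2 for i in range(length)]            # 0,1,0,1,...
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        for gene in pop:                               # mutate all genes
            for _ in range(mutations):                 # re-toss, not flip
                gene[rng.randrange(length)] = rng.randint(0, 1)
        pop.sort(key=fitness, reverse=True)            # fittest first
        if fitness(pop[0]) == length:
            return gen                                 # converged on target
        pop = [g[:] for g in pop[:pop_size // 2]] * 2  # 50% selection + copy
    return None
```

With the defaults (`mutations=1`) this sketch converges in the low thousands of generations, consistent with the 2000 to 4000 reported above; setting `mutations=2` is the variant discussed below.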

Now note that a single mutation (a fresh coin toss) at a random position in the 100-long gene has a 50% chance of landing on the correct value, and, for a gene that is half right, a 50% chance of hitting a position that is currently wrong = 25% chance of being favorable. It also has a 2 x 25% = 50% chance of being neutral (a right position re-tossed right, or a wrong position re-tossed wrong), leaving only a 25% chance of being unfavorable. This approximates Hoyle's description of what he calls the naively simplistic model widely accepted by evolutionary biologists and their followers.. it's the single favorable mutation model.. and yes, it behaves exactly as predicted. But now, following Hoyle, what happens if we introduce a second mutation.. ie make 2 random mutations at random positions in each new generation, while still selecting the 50% = 5 best genes?
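The 25% / 50% / 25% bookkeeping above can be checked by exhaustive enumeration for a gene that matches the target at exactly half its positions. This is a sketch of my own; the half-right gene is an illustrative construction, not taken from the original program:

```python
from fractions import Fraction

LENGTH = 100
target = [i % 2 for i in range(LENGTH)]  # the 0101.. pattern
# A gene that is right at exactly half its positions: copy the target,
# then break every second position.
gene = [t if i % 2 == 0 else 1 - t for i, t in enumerate(target)]

counts = {"favorable": 0, "neutral": 0, "unfavorable": 0}
# Enumerate every (position, toss) pair: a mutation re-tosses one bit.
for pos in range(LENGTH):
    for toss in (0, 1):
        before = gene[pos] == target[pos]
        after = toss == target[pos]
        if after and not before:
            counts["favorable"] += 1    # wrong bit corrected
        elif after == before:
            counts["neutral"] += 1      # right stays right, or wrong stays wrong
        else:
            counts["unfavorable"] += 1  # right bit broken

total = 2 * LENGTH
probs = {k: Fraction(v, total) for k, v in counts.items()}
# probs comes out as 1/4 favorable, 1/2 neutral, 1/4 unfavorable
```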

Go ahead.. run it..

I let it run for two days and over 40 million generations, and NO, it does not converge, just as Fred Hoyle's mathematical analysis predicted.. Beyond a certain point the second mutation becomes overwhelmingly unfavorable to the completion of the whole series.

You may think that a rather strange result.. adding just one more mutation completely annuls the power of 50% perfect selection..! So the real question it raises is: what exactly is the power of selection to 'create'? Consistent with standard probability rules, the length of the genome has a big effect on the improbability of the final outcome; convergence can easily be obtained for, say, a 50-long string even with 2 mutations. Although it now becomes a statistical analysis problem.. it starts to look very much like it supports my original falsification in Ch 9, on the basis that the selection 'bias' may be so small as to be negligible for large genomes with large numbers of unfavorable mutations.. ie Fred Hoyle's conclusion.
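The genome-length effect can be probed with the same assumed model by comparing a fixed generation budget across lengths. Again a sketch: `best_fitness_after` is my own name, and the budget is arbitrary:

```python
import random

def best_fitness_after(length, mutations, gens, pop_size=10, seed=0):
    """Run the assumed coin-toss model for a fixed generation budget and
    return the best fitness ever seen: a run that converges reports
    `length`, a run that plateaus reports something below it."""
    rng = random.Random(seed)
    target = [i % 2 for i in range(length)]
    fit = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best_ever = max(fit(g) for g in pop)
    for _ in range(gens):
        for g in pop:
            for _ in range(mutations):
                g[rng.randrange(length)] = rng.randint(0, 1)  # re-toss one bit
        pop.sort(key=fit, reverse=True)
        best_ever = max(best_ever, fit(pop[0]))
        if best_ever == length:
            break                                      # converged; stop early
        pop = [g[:] for g in pop[:pop_size // 2]] * 2  # 50% selection + copy
    return best_ever
```

Sweeping, say, `length in (50, 100)` at `mutations=2` with a generous budget is one way to see where convergence gives way to the plateau described above; where exactly the crossover falls is the statistical analysis problem noted here.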

2 comments:

  1. Two things. One: "high mutation rates are bad" isn't news, and Two: did you track mean fitness of your populations?

    It seems somewhat dishonest, when confronted with evidence that your hypothesis isn't actually valid, to simply "increase mutation rate" to a point where the model no longer works to perfection. Especially since 2% mutation per generation is a PHENOMENALLY high rate.

    Moreover, if you DO trace mean fitness, you'll see that even against such a deleterious mutation rate, mean fitness progressively increases, tending to gradually plateau to a point substantially below "perfection" but also significantly above starting fitness. Note: the previous model also did this, simply at a much higher mean fitness (as befits a lower mutation rate). The mechanism still works.

    Replies
    1. Mean fitness here is measured by the score reached in each generation. Yes, it does increase, and in the case of the 2-mutation model it plateaus at a point below "perfection". Yes, 2% is a very high rate; however, 50% perfect selection/reproduction is also unrealistically high.

      I agree this is not a model of real-world evolution.. but it does explore the algorithm, which is my purpose. The demonstration of plateauing under certain conditions shows the algorithm has limitations (as predicted by Hoyle), and that the simplistic assumption.. {variation + inheritance + natural selection}.. is not automatically a sufficient condition to prove the theory.
