I do not normally respond to people who conceal their name, but here's an exception.
Dr J programmed an enhanced model (100-long genomes instead of 10) at [https://t.co/k0WLnZyyTR], complete with results showing successful 'evolution' of a 1x100-long '01' (or 'HT') sequence after some 2000-4000 generations. It's not the 10x10 as originally proposed, but let's face it, 1x100 amounts to the same thing. Then there was the Python problem.
I programmed the same model and my answer was NO. My Python program had to be terminated after more than 10000 generations with no tendency to converge on the specified pattern. I thought Dr J's program was doing something else. On re-checking I discovered my Python program was flawed because, unknown to me, 'copying' a Python list by assignment does not produce a real copy: both names refer to the same underlying object. When corrected for this, my Python translation of Dr J's model does converge exactly as Dr J's program does, for a single mutation with a 75% chance of being favourable or neutral.
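For anyone who hits the same pitfall, here is a minimal Python sketch of the reference-versus-copy behaviour (the variable names are illustrative, not taken from either program):

    import copy

    genome = [0, 1, 1, 0]

    # Pitfall: assignment does NOT copy the list; both names point at the same object
    alias = genome
    alias[0] = 1
    print(genome)          # [1, 1, 1, 0] -- the "original" changed too

    # A shallow copy is enough for a flat list of ints
    child = list(genome)   # or genome[:], or genome.copy()
    child[0] = 0
    print(genome)          # still [1, 1, 1, 0] -- unaffected this time

    # For nested structures such as a list of genomes, use deepcopy
    population = [[0, 1], [1, 0]]
    next_gen = copy.deepcopy(population)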
So is this a model of evolution? Not according to Fred Hoyle, with rates of positive mutation averaging 25% and perfect selection, but I think it has all the required elements. It turns out to be an excellent way of verifying Hoyle's analytical results: by adjusting the parameters (number of mutations, selection/reproduction strength) and re-running many times, I should be able to plot the boundary which I predict must exist, imposed by the 2nd Law.
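A minimal sketch of the kind of parameter sweep I have in mind, using a toy single-lineage stand-in for the model (the function, its parameters and the target pattern are illustrative assumptions, not Dr J's actual program):

    import random

    def generations_to_converge(length=100, n_mut=1, keep_prob=1.0,
                                max_gens=5000, rng=random):
        # Toy stand-in: each generation flips n_mut random bits in a COPY of
        # the genome and keeps the copy, with probability keep_prob, when it
        # scores at least as well as the parent. keep_prob=1.0 is 'perfect
        # selection'. Returns the generation count at convergence, else None.
        target = [1] * length
        genome = [rng.randint(0, 1) for _ in range(length)]

        def score(g):
            return sum(a == b for a, b in zip(g, target))

        for gen in range(1, max_gens + 1):
            child = list(genome)                 # a real copy, not a reference!
            for _ in range(n_mut):
                child[rng.randrange(length)] ^= 1
            if score(child) >= score(genome) and rng.random() < keep_prob:
                genome = child
            if genome == target:
                return gen
        return None                              # did not converge in time

    # Sweep mutation count and selection strength, several runs per setting:
    for n_mut in (1, 2, 5):
        for keep_prob in (1.0, 0.5):
            runs = [generations_to_converge(n_mut=n_mut, keep_prob=keep_prob)
                    for _ in range(10)]
            done = sum(r is not None for r in runs)
            print(f"n_mut={n_mut} keep_prob={keep_prob}: {done}/10 converged")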
Clearly natural selection, in combination with the built-in repair and redundancy elements of the genetic code, is effective in the preservation of DNA in real populations. We only need to look at familiar small mammals such as voles, mice and rabbits to tell us that. What the model may show is the relationship between mutation frequency and selection strength needed to overcome normal adverse statistical events, and what degree of evolution, if any, may occur. What degree is the very same question tackled by Fred Hoyle in "The Mathematics of Evolution". His analysis is by rigorous analytical mathematical modelling, as opposed to the second-law limitations I have used. However, as a convinced evolutionist, he came to the same conclusion that I and many others have. In his words:
"When ideas are based on observations, as the Darwinian theory certainly is, it is usual for those ideas to be valid at least within the range of those observations. It is when extrapolations are made outside the range of observations that troubles may arise. So the issue that presented itself was to determine just how far the theory was valid and exactly why beyond a certain point it became invalid." Introduction p5
Thanks to Dr J for getting involved.
You're measuring something unrelated to evolution.
First of all, you're measuring the chance of getting all the coins right in one full throw of the sequence.
If you coded this, that would only give you the first generation: some fully random individuals to start with. You could do this, say, 100 times to create 100 individuals in the first generation.
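A minimal sketch of that first step in Python (the names and sizes are illustrative):

    import random

    GENOME_LEN = 100   # 100 coins per individual
    POP_SIZE = 100     # 100 individuals in the first generation

    def random_genome(rng=random):
        # One full throw of all the coins: 1 = heads, 0 = tails
        return [rng.randint(0, 1) for _ in range(GENOME_LEN)]

    population = [random_genome() for _ in range(POP_SIZE)]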
Then, if you wanted a particular sequence, you'd score those individuals against how closely they resembled the sequence you were after, and then you'd choose some high-scoring pairs to generate the next generation.
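Scoring and selection might look like this (the target pattern and the "keep the top half" rule are illustrative choices, not the only options):

    # Example target: a 100-symbol alternating pattern
    TARGET = [1, 0] * 50

    def fitness(genome):
        # Count positions that already match the target sequence
        return sum(g == t for g, t in zip(genome, TARGET))

    def breeding_pool(population):
        # Keep the higher-scoring half as candidate parents
        ranked = sorted(population, key=fitness, reverse=True)
        return ranked[: len(ranked) // 2]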
The next generation is the important bit. You do NOT rethrow all the coins. Generally you take 2 high-scoring parents and choose a crossover point. Then you make the child by copying the coins from the first parent up to the crossover point, and copying the coins from the second parent from the crossover point onwards.
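As a sketch, single-point crossover (assuming equal-length parent lists):

    import random

    def crossover(parent_a, parent_b, rng=random):
        # Child = head of parent A up to the crossover point, tail of parent B after it
        point = rng.randrange(1, len(parent_a))
        return parent_a[:point] + parent_b[point:]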
Once that's done, you typically re-throw a couple of coins. But notice, a re-throw might turn an H into an H, so in that case no mutation actually happens.
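A sketch of that mutation step (the number of re-throws is an illustrative choice):

    import random

    def mutate(genome, n_rethrows=2, rng=random):
        # Copy first -- never alias the parent list (see the copy pitfall above)
        child = list(genome)
        for _ in range(n_rethrows):
            i = rng.randrange(len(child))
            child[i] = rng.randint(0, 1)   # a re-throw may land the same way
        return child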
We then repeat the process until we have a full second generation, and then repeat for the generation after that; a complete sketch of the loop follows below.
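Putting the pieces together, a minimal self-contained sketch of the whole loop (population size, target pattern, selection rule and mutation rate are all illustrative assumptions):

    import random

    GENOME_LEN, POP_SIZE, RETHROWS = 100, 100, 2
    TARGET = [1, 0] * 50               # any fixed 100-symbol pattern works
    rng = random.Random(1)

    def fitness(g):
        return sum(a == b for a, b in zip(g, TARGET))

    population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    generation = 0
    while max(map(fitness, population)) < GENOME_LEN:
        generation += 1
        pool = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE:
            a, b = rng.sample(pool, 2)               # two high-scoring parents
            point = rng.randrange(1, GENOME_LEN)     # crossover point
            child = a[:point] + b[point:]            # head of A + tail of B
            for _ in range(RETHROWS):                # re-throw a couple of coins
                child[rng.randrange(GENOME_LEN)] = rng.randint(0, 1)
            children.append(child)
        population = children

    print(f"Matched the full target after {generation} generations")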
It's essential to notice that a child generation closely resembles the best of the parent generation. This resemblance is the core of inheritance and evolution.
The children will also vary slightly, due to the crossover and the FEW mutations they contain. These mutations will make some children better than others at matching the solution, and we then take only the best ones to make the next generation.
But the point is, we're taking the best solutions we have and tweaking them very slightly, not completely re-generating the solution from scratch every time.
This means our ability to search for a good solution is massively improved over your "re-throw everything every single time" model, which we know wouldn't work and which we have never proposed as a solution.
I have used this exact process to reproduce entire images, far more information than your 100 coins, in finite, even predictable time.
Hello Jim, sorry for the long delay, and thank you for your considered response.
I think what you are proposing is exactly what engineers do in Monte Carlo methods, that is, optimisation of parameters. The problem I see is that by effectively quarantining the correct symbols from further mutation you bias the result towards a solution. A real genome is subject to random mutations anywhere, including in already-advantageous sections.