Friday, December 11, 2015

Appendix 6.1 Dr J Verifies

So my 100 coin toss model of evolution turns out to be a reasonable illustration of how selection and replication work. By adjusting the parameters of the model it should be possible to plot the 2nd Law boundary which ultimately limits what any natural process can achieve.. Which is what Fred Hoyle actually showed in "The Mathematics of Evolution".. He said..

"evolution is correct in the small but not in the large.."  Introduction p6

Dr J's program of my model shows that my assumption (that selection and reproduction have no effect on the probability of the final outcome) is wrong!

However by mapping the effect of selection and reproduction I may be able to establish a relationship between (+ve mutation rates ~ selection strength ~ reproductive success) and the limits imposed by the 2nd Law. This should be possible because I discovered the unique number which sets the boundary for violation of the second law in any system where the improbability can be calculated (as with DNA). It may in the end require a fairly rigorous statistical approach, but overall to date my 2nd Law boundary theory indicates it should agree quite well with Fred Hoyle's results in "The Mathematics of Evolution"..

So was my 'falsification' in Ch 9 correct or not.. As far as the (corrected) program of the model goes.. I have to say not yet for evolution by natural selection (but it should still falsify the abiogenesis protein-first model).

I now need to summarize the basic principles underlying all this..

(1)   The total improbability of any state of matter is quantified by its absolute entropy. For any ordered state of matter it represents the decrease in entropy from the state of complete disorder (equilibrium).  All forms of order are included.. physical (structure) and logical (information).

(2)   By the second law there must be a corresponding increase in entropy of the surroundings of the system which must be a direct consequence of the processes and events that created the state of order.

(3)   The rule of conditional probability allows the process to be analysed in parts which themselves must each obey the second law when all contributing processes are included.
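
As a worked illustration of principle (1), here is a minimal sketch, assuming Boltzmann's relation S = k.ln(W): a specific ordered sequence of 100 coins is just 1 of 2^100 equally likely arrangements, so fixing it represents an entropy decrease of k.100.ln(2) below complete disorder..

    import math

    k = 1.380649e-23          # Boltzmann constant, J/K
    n = 100                   # coins (binary 'digits' of order)

    # one specific sequence out of 2^n equally likely arrangements:
    dS = k * n * math.log(2)  # entropy decrease from equilibrium, via S = k.ln(W)
    print(f"entropy decrease: {dS:.3e} J/K for an improbability of 1 in 2^{n}")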

Richard Feynman, in his 1974 Caltech address, warned scientists "before going public consider every conceivable way we might be wrong"..
Popular Science, May 2015, "Nothing but the Truth".. I strongly agree.

Friday, December 4, 2015

Appendix 6 Dr J Verifies

I do not normally respond to people who conceal their name.. but here's an exception..

Dr J programmed an enhanced model (100-long genomes instead of 10) at [https://t.co/k0WLnZyyTR], complete with results showing successful 'evolution' of a 1x100-long '01' (or 'HT') sequence after some 2000 - 4000 generations. It's not the 10x10 as originally proposed but let's face it, 1x100 is the same thing, ok.. Then there was the Python problem..

I programmed the same model and my answer was NO.. My Python program had to be terminated at over 10000 generations.. with no tendency to converge on the specified pattern.. I thought Dr J's program was doing something else. On re-checking I discovered my Python program was flawed because, unknown to me, 'copies' of Python lists made by assignment are not real copies.. they are references (pointers) to the same objects.. When corrected for this, my Python translation of Dr J's program of the model does converge, exactly as Dr J's program does, for a single mutation with a 75% chance of being favorable or neutral.
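
For the record, here is a minimal sketch of the trap (the names are mine, not Dr J's)..

    # the inner lists stand in for 'genes' of H/T coins
    population = [['H'] * 100 for _ in range(10)]

    alias = population[0]     # NOT a copy.. just another reference (pointer) to the same list
    alias[0] = 'T'
    print(population[0][0])   # prints 'T' - the 'original' gene changed too!

    real_copy = population[0][:]              # a slice makes a real (shallow) copy
    deep_copies = [g[:] for g in population]  # copy every gene when duplicating a population
    real_copy[1] = 'T'
    print(population[0][1])   # still 'H' - the original is untouched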

So is this a model of evolution..? Not according to Fred Hoyle.. not with rates of +ve mutation averaging 25% and perfect selection.. but I think it has all the required elements. It turns out to be an excellent way of verifying Hoyle's analytical results: by adjusting the parameters (number of mutations, selection/reproduction strength) and re-running many times I should be able to plot the boundary which I predict must exist, imposed by the 2nd Law.

Clearly natural selection, in combination with the built-in repair and redundancy elements of the genetic code, is effective in the preservation of DNA in real populations. We only need to look at familiar small mammals (voles, mice & rabbits) to tell us that. What the model may show is the relationship between mutation frequency and selection strength needed to overcome normal adverse statistical events, and what degree of evolution, if any, may occur. What degree is the very same question tackled by Fred Hoyle in "The Mathematics of Evolution". His analysis is by rigorous analytical math modelling, as opposed to the second law limitations approach I have used. However he, as a convinced evolutionist, came to the same conclusion I and many others have. His words..

"When ideas are based on observations, as the Darwinian theory certainly is, it is usual for those ideas to be valid at least within the range of those observations. It is when extrapolations are made outside the range of observations that troubles may arise. So the issue that presented itself was to determine just how far the theory was valid and exactly why beyond a certain point it became invalid."  Introduction p5

Thankful to Dr J for getting involved..

Thursday, December 3, 2015

Appendix 5.3 Richard Dawkins Verifies

This is actually by way of answer to Dr J (a keen supporter of Mr Dawkins) on Twitter.. It falls under this heading however as it is still in that sphere of simple probability modelling of supposed evolutionary processes oft referred to by Richard himself.

First, the model Richard re-tweeted [tunartphoto.com].. Yes he's a photographer.. Title:

Understanding Evolution With A Handful Of Dice


Take a handful of dice and throw them.. set aside the 6's and throw only the rest.. repeat.. voila, all 6's = evolution..!  Really.. The end state is known and the probability of getting it = 1..!!
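
A minimal Python sketch of the dice procedure (my own few lines, assuming fair six-sided dice) shows the point: the process is certain to end with all 6's, only the number of rounds varies..

    import random

    def evolve_sixes(n_dice=10):
        """Keep the 6's, re-throw the rest, until every die shows a 6."""
        remaining = n_dice       # dice not yet showing a 6
        rounds = 0
        while remaining > 0:
            rolls = [random.randint(1, 6) for _ in range(remaining)]
            remaining -= rolls.count(6)   # the 6's are 'selected' and set aside
            rounds += 1
        return rounds            # guaranteed to finish.. P(end state) = 1

    print(evolve_sixes())        # typically a dozen or two rounds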

I think all this serves to demonstrate is an appalling state of ignorance.. undeserving of one who held the Chair for the Public Understanding of Science in Britain..!

Dr J commented something very similar..
Repeat audience guessing of a coin toss.. kill all that get it wrong (about half) each time replace with 'descendants' = copies of those who got it right.. toss again replace those who got it wrong etc.. and repeat..
So by 2nd audience all are parents or 'descendants' of those who guessed right once..
By 3rd audience all are parents or 'descendants' of those who guessed right twice..
By nth audience all are parents or 'descendants' of those who guessed right n-1 times..

Firstly we must decide what Dr J means by 'breed'.. is that like a clone of a person who guessed right, implying they have inherited a 'gene' for guessing coin tosses? If so how does this 'gene' work.. are they more likely to guess right..? Well in Dr J's own words "about half" will be eliminated each round.. so NO, they do not have any greater chance of guessing the next toss right. Which all means you end up with exactly what you started with after each round.. About half get it right and half get eliminated.. In other words IT'S GOING NOWHERE..!

Secondly, the implication that each successive half who guess right are also the ones who guessed right previously is clearly incorrect, so in the end you cannot say how many guesses in a row any one member of that audience got right..
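
A minimal sketch of Dr J's procedure (my own construction) makes the point: since the 'descendants' inherit nothing that helps, every new audience still gets the toss right only about half the time..

    import random

    def run_audience(size=1024, rounds=8):
        """Each round the whole audience guesses a fair coin toss; the wrong
        half are 'killed' and replaced by copies of those who got it right."""
        for r in range(1, rounds + 1):
            winners = sum(random.random() < 0.5 for _ in range(size))
            print(f"round {r}: {winners / size:.0%} guessed right")  # ~50% every round
            size = 2 * winners   # losers replaced by 'descendants' of the winners

    run_audience()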

Here's A BETTER EXAMPLE.. Evolve 100 coins into an HTHTHTHT.. pattern.
(The end result is only improbable because of the order not the count of Heads)

Take a population of 10 'genes' of 100 coins.. Toss all for starters..
1 - Score and Kill the 5 groups with least total runs of HTHT.. etc.
2 - Duplicate the top 5 so we have 10 with longest runs of HTHT..etc
3 - Randomly toss 1 or more coins from each gene. [no protected areas]
4 - Go back to 1 and repeat. [Top 5 mutated groups (genes) are retained]

What do you think happens..?
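
A minimal Python sketch of steps 1 - 4 (the scoring rule.. longest alternating run.. and the one-toss-per-gene mutation rate are my own assumptions, so treat it as a sketch rather than the definitive model). Varying the number of tosses and the population size over many re-runs is exactly the parameter sweep needed to look for the 2nd Law boundary..

    import random

    TARGET_LEN = 100   # coins per 'gene'
    POP = 10           # genes in the population

    def score(gene):
        """Assumed fitness: length of the longest alternating HTHT.. run."""
        best = run = 1
        for a, b in zip(gene, gene[1:]):
            run = run + 1 if a != b else 1
            best = max(best, run)
        return best

    def generation(pop, tosses=1):
        pop.sort(key=score, reverse=True)
        survivors = pop[:POP // 2]                   # 1 - kill the 5 worst scorers
        pop = survivors + [g[:] for g in survivors]  # 2 - duplicate the top 5 (real copies!)
        for gene in pop:                             # 3 - re-toss random coins, no protected areas
            for _ in range(tosses):
                gene[random.randrange(TARGET_LEN)] = random.choice('HT')
        return pop                                   # 4 - repeat (mutated top 5 retained)

    pop = [[random.choice('HT') for _ in range(TARGET_LEN)] for _ in range(POP)]
    for gen in range(1, 100001):
        pop = generation(pop)
        if max(score(g) for g in pop) == TARGET_LEN:
            print(f"full HTHT.. pattern after {gen} generations")
            break
    else:
        print("no convergence within 100000 generations")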

Sunday, October 18, 2015

Appendix 5.2 Richard Dawkins Verifies

These derivations go like this.. First you set the problem up in words.. then you translate those into math symbols, work the math to get an answer and then translate it back into words. So here goes..

E  =  N . Σ(m=1..n) 1/2^(m-1)  =  N . (2^n - 1)/2^(n-1)  =  2N . (2^n - 1)/2^n

Now for n large enough (number of correct guesses ~ 8 will do),
(2^n - 1)  ~=  2^n,  leaving  E  ~=  2N

Now for n = 8 (RD's example where he noted the audience was about 100)

E (from actual calculation above) = N . (255/128)

From simple probability..  n correct guesses at random will occur once every 2^n tosses if averaged over a very large number of tosses.

Hence the expected value of E is 2^8 = 256 so putting this value in above we get..

N . (255/128)  =  256    Therefore N = 128.502   or    129  (whole people)
This corresponds well with Richard Dawkins' estimate at the time. Now look at this table..


n                  N                    2^n = E  
8                 129                  256
10               513                1024
15           16385              32768
20       524289          1048576

So N (rounded up) ~= E/2.. The audience number N determines E (the number of events) which (by my proof) = the required number to pay the entropy debt for the low entropy result (a number of heads guessed in a row)..
The simulation verifies how the entropy debt is paid for by the process.. QED
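
For anyone who wants to check the table, a few lines of Python (mine).. setting E = N . (2^n - 1)/2^(n-1) equal to 2^n gives N = 2^(2n-1)/(2^n - 1)..

    import math

    # N = 2^(2n-1) / (2^n - 1), rounded up to whole people
    for n in (8, 10, 15, 20):
        N = 2**(2*n - 1) / (2**n - 1)
        print(n, math.ceil(N), 2**n)   # reproduces the n, N, E rows above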

Saturday, October 17, 2015

Appendix 5.1 Richard Dawkins Verifies

This is all about an experiment Richard Dawkins did with an audience of young people to demonstrate improbable events happen all the time and the inclination to see the supernatural is in fact illogical.. Ref: [https://www.youtube.com/watch?v=H1TxH0zf07w]..

So let's see what this same experiment reveals when analysed for conformance with the second law.. the basis of my falsification.. First I must say I agree the outcome is in no way supernatural.. and you do end up with an apparently remarkable result.. in the case given, guessing the outcome of a coin toss 8 times in a row. But what does this actually demonstrate..?

Let's take the number of people participating  =  N
and the number of correct guesses in a row  =  n

The total number of guess 'events' = N + N/2 + N/4 + ... + N/2^(n-1)  =  (let's call this) E

E  =  N . Σ(m=1..n) 1/2^(m-1)   {i.e. N x the sum of the series 1/2^(m-1) for m = 1 to n}

Assuming half sit down after each toss of the coin.. because they got it wrong.
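
A minimal Monte Carlo sketch (my own) of the experiment, counting every guess 'event'.. averaged over many runs the total comes out at N . (2^n - 1)/2^(n-1).. about 255 for N = 128 and n = 8.. i.e. E ~= 2N as the series predicts..

    import random

    def guess_events(N=128, n=8):
        """Count every guess while the wrong half sit down after each toss."""
        standing, events = N, 0
        for _ in range(n):
            events += standing
            standing = sum(random.random() < 0.5 for _ in range(standing))
        return events

    trials = 10000
    print(sum(guess_events() for _ in range(trials)) / trials)   # ~255, i.e. E ~= 2N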

Now my falsification is based on the observation that the number of random changes (events, or tosses of a coin) required on average to get a certain improbable outcome is how the process balances the entropy decrease of that outcome (state) with an entropy increase in the surroundings. It's the entropy cost of the low entropy state which, by the second law, must be paid by that process.

So now we have the end result being the low entropy state of 8 correct guesses in a row which is equivalent to tossing 8 heads in a row or tossing 8 coins and ending up with 8 heads.

The probability of this outcome = 1/2^8  (or 1/2^n for 'n' correct guesses in a row)

So the question is, what is the audience size N to give enough events to pay the cost..?