Sunday, February 21, 2016

Appendix 7.1 Modeling Evolution

Fred Hoyle was a brilliant mathematician.. Plumian Professor of Astronomy at Cambridge.. he worked out the nucleosynthesis of the heavy elements in the cores of stars.. He was an anti-creationist and made a genuine attempt to model the evolutionary algorithm analytically.. He differs from what he calls the "new believers" (in evolution by natural selection) in one characteristic: he told the truth, in "The Mathematics of Evolution".. You need to hear what he had to say..

"Let us start naively with the feedback equation..
dx/dt = s·x    (t = time)    (1.1)

in which x is considered to be the fraction of some large population that possesses a particular property, 'A' say, the remaining fraction (1 - x) possessing a different property 'a', all the other individuals being otherwise similar to each other."

After integration to find x, and some elaboration on the reproductive outcomes of this model for A being advantageous (s > 0), he gets..

x = x₀·exp(st)

"So it is agreed for s > 0, with A then a favorable property, that x rises to unity, with all members of the population coming to possess it, in a time span of the order (1/s)·ln(1/x₀) generations..  ...  And if s < 0 the solution dies away in a time span of the order 1/|s| generations, thereby implying that if A is unfavorable it will be quickly rejected.

I am convinced it is this almost trivial simplicity that explains why the Darwinian theory of evolution is so widely accepted, why it has penetrated through the educational system so completely. As one student text puts it.. 'The theory is a two-step process. First, variation must exist in a population. Second, the fittest members of the population have a selective advantage and are more likely to transmit their genes to the next generation.'

But what if individuals with a good gene A carry a bad gene B having the larger value of |s|? Does the bad gene not carry the good one down to disaster? What of the situation that bad mutations must enormously exceed good ones in number?"  (A fact acknowledged by all the research.) And so, after some work, he gets..
x  ≈  x₀/(x₀ + exp(-st))      (1.6)

"Unlike the solution to (1.1) for s > 0, x does not increase to unity ... but only to 1/2... Property A does not 'fix' itself in the species in any finite number of generations. A residuum of individuals remains with the disadvantageous property 'a'."
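The contrast between the two quoted solutions is easy to check numerically. Here is a small sketch in Python; the values of x₀ and s are my own illustrative choices, not Hoyle's:

```python
import math

def naive_fraction(x0, s, t):
    """Solution of Hoyle's (1.1), dx/dt = s*x: pure exponential growth."""
    return x0 * math.exp(s * t)

def logistic_fraction(x0, s, t):
    """Hoyle's (1.6): x ~= x0 / (x0 + exp(-s*t)), bounded below 1."""
    return x0 / (x0 + math.exp(-s * t))

x0, s = 1e-3, 0.01  # a rare gene with a small selective advantage (illustrative)
for gens in (0, 500, 1000, 2000):
    print(gens, naive_fraction(x0, s, gens), logistic_fraction(x0, s, gens))
```

Note that the naive solution eventually exceeds 1, which is unphysical for a fraction of a population, while (1.6) approaches unity but never reaches it in any finite number of generations, matching the quoted observation that A never "fixes" itself.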

My verification of this next..

Tuesday, February 16, 2016

Appendix 7 Modeling Evolution

First a note about probability and entropy..

The probability of any 'event' is determined by the 'prior' expectation of it (by a given process). (i.e. if coins are intelligently placed in order (HTHT.. etc.) then the probability of that arrangement would appear to be 1, because the outcome is certain). So by this it would appear the probability of achieving a state of matter depends upon how it is produced (the process). But the absolute entropy of any state of matter is independent of the process that produced it! So it would appear the entropy cost of producing an end state is independent of the absolute entropy of the state itself! Yet we know they must be linked, because any low entropy state must be accounted for by an increase in entropy (in the surroundings), and the only available way to get it is the process that produced it..?

How do we reconcile this apparent contradiction..

If the improbability of a state is reduced by a natural bias (like selection), or even by direct intelligence, the entropy of that state is not changed, and the cost of that state must still be accounted for.. The answer must lie in the fact that I have only been considering the final steps (the placing or tossing of the coins) of a process which is in fact the result of a much larger 'system' that makes the outcome possible..

Consider where the coins (or dice) came from.. what about the table, the room, and even the person doing the placing? So is it valid to calculate the improbability simply by looking at the end result? The answer lies in the term conditional probability. The probability of A given B is written Pr(A|B). (Noting that improbability is just 1/probability.) So when I am calculating the probability of a sequence of 100 coins (leave the DNA for now).. it is actually..

Pr(100 HT pattern | coins, table, room, person, land, earth etc etc.) with all those 'givens'..

It is therefore possible to simply state that the absolute entropy of the state is not changed by the process; it is just that I am calculating the conditional probability (or improbability) of the last part of the process (system) that produced it. Most important here is the fact that each conditional part of the process must result in an entropy increase which exceeds the drop in entropy that it creates. So the drop in entropy resulting from a person placing the coins in order is different from the drop in entropy resulting from tossing the coins randomly to get the same order. But the absolute entropy (~probability) of the final state is the same in both cases. They simply have a different set of 'givens'.
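The coin example can be made concrete with a small sketch in Python, separating the three quantities being distinguished: the conditional probability given random tossing, the conditional probability given deliberate placement, and the process-independent count of arrangements that fixes the absolute entropy of the state:

```python
from fractions import Fraction

N = 100  # the 100-coin sequence from the text

# Pr(pattern | fair, independent tosses): one outcome out of 2**N
pr_given_random_toss = Fraction(1, 2) ** N

# Pr(pattern | a person deliberately placing each coin): certain
pr_given_placement = Fraction(1)

# The state itself, independent of process: one arrangement among 2**N.
# Its absolute entropy depends only on this count, not on the 'givens'.
num_arrangements = 2 ** N

print(pr_given_random_toss)  # 1/1267650600228229401496703205376
print(pr_given_placement)    # 1
```

The two conditional probabilities differ enormously, yet num_arrangements (and hence the absolute entropy of the final sequence) is the same in both cases; the entropy cost has simply been paid elsewhere in the larger system (the person, the room, etc.).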

I hope that is clear enough..