Natural selection in itself originates nothing. It can only select out of what is already present to be selected from. In order to be the driving engine of evolution, it needs a source of new raw material to be tested and either preserved for further experimentation or rejected. Much is written about genetic transposition and recombination: the insertion, deletion, and duplication of the genes carried by the chromosomes, and their rearrangement into new permutations. And it is true that an enormous variety of altered programs for directing the form that an organism will assume can be produced in this way, a variety far greater than could ever be realized in an actual population. Lee Spetner, a former MIT physicist and information scientist who has studied the mathematics of evolution for forty years, calculates the number of possible variations that could occur in a typical mammalian genome to be on the order of one followed by 24 million zeros. 23 (Yes, I did get that right. Not 24 orders of magnitude; 24 million orders of magnitude.) Of this, the fraction that could be stored in a population of a million, a billion, ten billion, or a hundred billion individuals (it really doesn't make much difference) is so close to zero as to be negligible. And indeed this is a huge source of potential variety. But the attention it gets is misleading, since it's the same sleight of hand we saw before of presenting lots of discussion and examples of adaptive variations that nobody doubts, and assuming evolutionary transitions to be just more of the same. The part that's assumed is precisely what the exercise is supposed to be proving. For all that's going on, despite the stupendous number of combinations it can come up with, is a reshuffling of the genes that already make up the genome of the species in question.
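The arithmetic is easy to check on a log scale. A minimal Python sketch, taking the 24-million-zeros figure simply as Spetner's estimate quoted above:

```python
# Log-scale check: what fraction of the estimated number of possible
# genome variations could a real population ever sample?
import math

LOG10_VARIANTS = 24_000_000  # Spetner's figure: 1 followed by 24 million zeros

for population in (1e6, 1e9, 1e10, 1e11):
    # Even if every individual carried a unique variant, the sampled
    # fraction is at most 10^(log10(population) - 24,000,000).
    log10_fraction = math.log10(population) - LOG10_VARIANTS
    print(f"population {population:.0e}: fraction ~ 10^{log10_fraction:,.0f}")
```

Moving from a million to a hundred billion individuals shifts the exponent by only five, against a deficit of twenty-four million, which is why the choice of population size makes no practical difference.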
Recombination is a very real and abundant phenomenon, taking place through sexual mixing whenever a mating occurs and well able to account for the variation that we see: it's theoretically possible for two siblings to be formed from exactly complementary gametes (the half set of parental genes carried by a sperm or egg cell) from each parent, and thus not to share a single gene in common. But it can't work beyond the species level, where the inconceivably greater numbers of transitions that are supposed to have happened are precisely what we don't see.
The source of original variation that Darwin sought was eventually identified as the mechanism of genetic mutation deduced from Mendel's studies of heredity, which was incorporated into Darwinian theory in what became known in the 1930s as the neo-Darwinian synthesis. By the 1940s the nucleic acid DNA was known to be the carrier of hereditary information, and in 1953 James Watson and Francis Crick determined the molecule's double-helix structure with its "cross-rungs" of nucleotide base pairs that carry the genetic program. This program is capable of being misread or altered, leading the molecular biologist Jacques Monod, director of the Pasteur Institute, to declare in 1970 that "the mechanism of Darwinism is at last securely founded." 24 Let's take a deeper look, then, at what was securely founded.
An Automated Manufacturing City
Sequences of DNA base pairs (complementary arrangements of atoms that bridge the gap between the molecule's two "backbones" like the steps of a helical staircase) encode the instructions that direct the cellular protein-manufacturing machinery to produce the structural materials for building the organism's tissues, as well as molecules like hormones and enzymes to regulate its functioning. The operations that take place in every cell of the body are stupefyingly complex, embodying such concepts as real-time feedback control, centralized databanks, error checking and correction, redundancy coding, distributed processing, remote sensing, prefabrication and modular assembly, and backup systems, all of which are found in our most advanced automated factories. Michael Denton describes it as a miniature city:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. . . . We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation. We would wonder even more as we watched the strangely purposeful activities of these weird molecular machines, particularly when we realized that, despite all our accumulated knowledge of physics and chemistry, the task of designing one such molecular machine, that is, one single functional protein molecule, would be completely beyond our capacity at present and will probably not be achieved until at least the beginning of the next century. 25
And this whole vast, mind-boggling operation can replicate itself in its entirety in a matter of hours. When this happens through the cell dividing into two daughter cells, the double-stranded DNA control tapes come apart like a zipper, each half forming the template for constructing a complete copy of the original DNA molecule for each of the newly forming cells. Although the copying process is monitored by error-detection mechanisms that surpass anything so far achieved in our electronic data processing, copying errors do occasionally happen. Also, errors can happen spontaneously or be induced in existing DNA by such agents as mutagenic chemicals and ionizing radiation. Once again the mechanism for repairing this kind of damage is phenomenally efficient (if it were not, such being the ravages of the natural environment, no fetus would ever remain viable long enough to be born), but at the end of the day, some errors creep through to become part of the genome written into the DNA. If the cell that an error occurs in happens to be a germ cell (sperm or egg), the error will be heritable and appear in all the cells of the offspring it's passed on to. About 10 percent of human DNA actually codes for structural and regulatory proteins; the function of the rest is not known. If the inherited copying error is contained in that 10 percent, it could be expressed as some physical or behavioral change, though not necessarily: the code is highly redundant, with several codons frequently specifying the same amino acid, so that mutating one into another doesn't alter anything.
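The redundancy point can be illustrated with a toy sketch. The codon assignments below are standard genetic-code facts; the function name is my own invention for illustration:

```python
# Toy illustration of genetic-code redundancy: leucine is encoded by
# six different codons, so some single-letter (point) mutations are
# "silent" -- the protein built from the mutated gene is identical.
LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}

def silent_leucine_mutation(before: str, after: str) -> bool:
    """True if both codons encode leucine, i.e. the change has no effect."""
    return before in LEUCINE_CODONS and after in LEUCINE_CODONS

print(silent_leucine_mutation("CTT", "CTC"))  # True: still leucine
print(silent_leucine_mutation("CTT", "TTT"))  # False: TTT encodes phenylalanine
```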
Such "point mutations" of DNA are the sole source of innovation that the neo-Darwinian theory permits to account for all life's diversity. The theory posits the accumulation of tiny, insensible fluctuations to bring about all major change, since large variations would cause too much dislocation to be viable. They must occur frequently enough for evolution to have taken place in the time available; but if they occur too frequently no two generations would be the same, and no "species" as the basis of reproducing populations could exist. The key issue, therefore, is the rate at which the mutations that the theory rests on take place. More specifically, the rate of favorable mutations conferring some adaptive benefit, since harmful ones obviously contribute nothing as far as progress toward something better is concerned.
And here things run into trouble straight away, for beneficial mutations practically never happen. Let's take some of the well-known mutations that have been cataloged in studies of genetic diseases as examples.
All body cells need a certain amount of cholesterol for their membranes. It is supplied in packages of cholesterol and certain fats manufactured by the liver and circulated via the cardiovascular system. Too much of it in circulation, however, results in degeneration and narrowing of the large and medium-size arteries. Cholesterol supply is regulated by receptor proteins embedded in the membrane wall that admit the packages into the cell and send signals back to the liver when more is needed. The gene that controls the assembly of this receptor protein from 772 amino acids is on chromosome 19 and consists of about 45,000 base pairs. Over 350 mutations of it have been described in the literature. Every one of them is deleterious, producing some form of disease, frequently fatal. Not one is beneficial.
Another example is the genetic disease cystic fibrosis that causes damage to the lungs, digestive system, and in males the sperm tract. Again this traces to mutations of a gene coding for a transmembrane protein, this time consisting of 1,480 amino acids and regulating chloride ion transport into the cell. The controlling gene, called CFTR, has 250,000 base pairs to carry its instructions, of which over 200 mutations are at present known, producing conditions that range from severe lung infections leading to early deaths among children, to lesser diseases such as chronic pancreatitis and male infertility. No beneficial results have ever been observed.
"The Blind Gunman" would be a better description of this state of affairs. And it's what experience would lead us to expect. These programs are more complex than anything running in the PC that I'm using to write this, and improving them through mutation would be about as likely as getting a better word processor by randomly changing the bits that make up the instructions of this one.
The mutation rates per nucleotide that Spetner gives from experimental observations are between 0.1 and 10 per billion transcriptions for bacteria and between 0.01 and 1 per billion for other organisms, giving a geometric mean of 1 per billion. 26 He quotes G. Ledyard Stebbins, one of the architects of the neo-Darwinian theory, as estimating 500 successive steps, each step representing a beneficial change, to change one species into another. To compute the probability of producing a new species, the next item required would be the fraction of mutations that are beneficial. However, the only answer here is that nobody knows for sure that they occur at all, because none has ever been observed. The guesses found here and there in the evolutionary literature turn out to be just that: guesses, postulated as a hypothetical necessity for the theory to stand. (Objection: What about bacteria mutating to antibiotic-resistant strains? A well-documented fact. Answer: It can't be considered meaningful in any evolutionary sense. We'll see why later.)
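The geometric mean of a range is the square root of the product of its endpoints; checking the quoted ranges (all rates in events per billion transcriptions):

```python
# Geometric means of the mutation-rate ranges quoted from Spetner
# (all rates in mutations per billion nucleotide transcriptions).
import math

def geo_mean(lo: float, hi: float) -> float:
    return math.sqrt(lo * hi)

print(geo_mean(0.1, 10))   # bacteria: 1.0 per billion
print(geo_mean(0.01, 1))   # other organisms: 0.1 per billion
```

The 1-per-billion figure used in the argument corresponds to the bacterial range; the nominal rate the calculation works with is of this order.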
But let's follow Spetner and take it that a potentially beneficial mutation is available at each of the 500 steps, and that it spreads into the population. The first is a pretty strong assumption to make, and there's no evidence for it. The second implies multiple cases of the mutation appearing at each step, since a single occurrence is far more likely to be swamped by the gene pool of the general population and disappear. Further, we assume that the favorable mutation that exists and survives to spread at every step is dominant, meaning that it will be expressed even if occurring on only one of the two parental chromosomes carrying that gene. Otherwise it would be recessive, meaning that it would have to occur simultaneously in a male and a female, who would then need to find each other and mate.
Even with these assumptions, which all help to oil the theory along, the chance that the postulated mutation will appear and survive in one step of the chain works out at around 1 in 300,000, which is less than that of flipping 18 coins and having them all come up heads. For the comparable thing to happen through all 500 steps, the odds become 1 in a number with more than 2,700 zeros.
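Both figures are easy to verify with a quick Python check (numbers taken from the passage):

```python
# Check the two claims: 1 in 300,000 vs. 18 heads in a row, and the
# size of the compounded odds over 500 steps.
import math

print(2 ** 18)  # 262,144 -- so 18 straight heads is slightly more likely
                # than a 1-in-300,000 event

# Odds of clearing all 500 steps: (1/300,000)^500, i.e. 1 in 10^zeros
zeros = 500 * math.log10(300_000)
print(round(zeros))  # ~2739, hence "more than 2,700 zeros"
```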
Let's slow down for a moment to reflect on what that means. Consider the probability of flipping 150 coins and having them all come up heads. The event has a chance of 1 in 2^150 of happening, which works out at about 1 in 10^45 (1 followed by 45 zeros, or 45 orders of magnitude). This means that on average you'd have to flip 150 coins 10^45 times before you saw all heads. If you were superfast and could flip 150 coins, count them, and pick them up again all in one second, you couldn't do it in a lifetime. Even a thousand people continuing nonstop for a hundred years would only get through 3 trillion flips, i.e., 3 × 10^12, still a long, long way from 10^45.
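The coin arithmetic checks out; an illustrative Python sketch:

```python
# 2^150 in orders of magnitude, and the throughput of a thousand
# people flipping 150 coins once per second for a century.
import math

print(math.log10(2 ** 150))  # ~45.15: 2^150 is about 1.4 x 10^45

flips = 1000 * 100 * 365.25 * 24 * 3600  # people x years x seconds/year
print(f"{flips:.1e}")  # ~3.2e12: roughly 3 trillion flips
```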
So let's try simulating it on a circuit chip that can perform each flip of 150 coins in a trillionth of a second. Now build a supercomputer from a billion of these chips and then set a fleet of 10 billion such supercomputers to the task . . . and they should be getting there after somewhere around 3 million years. Well, the odds that we're talking about, of producing just one new species even with favorable assumptions all the way down the line, are over two thousand orders of magnitude more improbable than that.
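Under the stated assumptions (10^12 flips per second per chip, 10^9 chips per machine, 10^10 machines), the timescale does come out in the millions of years:

```python
# Time for the hypothetical fleet of supercomputers to work through
# 2^150 flip-trials at the stated rates.
flips_per_second = 1e12 * 1e9 * 1e10   # chip rate x chips x machines = 1e31
seconds_needed = 2 ** 150 / flips_per_second
years_needed = seconds_needed / (365.25 * 24 * 3600)
print(f"{years_needed:.1e} years")     # on the order of millions of years
```

Depending on rounding, this lands in the low millions of years, consistent with the text's rough figure.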
This is typical of the kinds of odds you run into everywhere with the idea that life originated and developed by accumulated chance. Spetner calculates odds of 1 in 10^600 (600 orders of magnitude) against the occurrence of any instance of "convergent evolution," which is invoked repeatedly by evolutionists to explain physical similarities that by no stretch of the imagination can be attributed to common ancestry. The British astronomer Sir Fred Hoyle gives 5 in 10^19 as the probability that one protein could have evolved randomly from prebiotic chemicals, and for the 200,000 proteins found in the human body, a number with 40,000 zeros. 27 The French scientist Lecomte du Noüy computed the time needed to form a single protein in a volume the size of the Earth as 10^243 years. 28 These difficulties were already apparent by the mid-sixties. In 1967 a symposium was held at the Wistar Institute in Philadelphia to debate them, with a dazzling array of fifty-two attendees from the ranks of the leading evolutionary biologists and skeptical mathematicians. Numbers of the foregoing kind were produced and analyzed. The biologists had no answers other than to assert, somewhat acrimoniously from the reports, that the mathematicians had gotten their science backward: Evolution had occurred, and therefore the mathematical problems in explaining it had to be only apparent. The job of the mathematicians, in other words, was not to assess the plausibility of a theory but to rubber-stamp an already incontestable truth.