Intelligent Design

Shannon’s Theory of Information

Most of us use the term “information” to describe a piece of knowledge.  When we say that so-and-so passed on such-and-such piece of information, we mean that so-and-so told us something we did not know before, but that we now know thanks to what we were told.  Information equals knowledge, in other words.


The first definition of information in Webster’s dictionary reflects this idea: information is “the communication or reception of knowledge or intelligence.”  This idea of information often confuses people at first when we begin to talk about information stored in a molecule.  It can be said that DNA stores the “know-how” for building molecules in the cell.  Yet since neither DNA nor the cellular machinery that receives its instruction set is a conscious agent, equating biological information with knowledge in this way does not quite fit Webster’s first definition.  The dictionaries, however, point to another common meaning of the term that does apply to DNA.  Webster’s, for instance, has a second definition of information: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects.”  Information, according to this definition, is an arrangement or string of characters, specifically one that accomplishes a particular outcome or performs a communication function.

In common usage, we refer not only to a sequence of English letters in a sentence, but also to a block of binary code in a software program, as information.  This is a simple concept, but one you need to grasp before going on in this series.  In this sense, information does not require a conscious recipient of a message; it refers simply to a sequence of characters that produces some specific effect.  What that effect is, we do not need to know up front, although determining what a sequence of characters ultimately means is our eventual goal.  This definition identifies the distinct sense in which DNA contains information: DNA contains “alternative sequences” of nucleotide bases that can produce specific effects.

Now for the conceptual point of this article.  Neither DNA nor the cellular machinery that uses the DNA’s information is conscious.  But neither is a paragraph in a book, or a section of software, or the computer hardware that “reads” it.  Clearly, software contains some kind of information.  I first began to think about the DNA enigma around 1995, after having the opportunity to design software for a medical company.  As I considered the vast amounts and varieties of information that computers store, process, and retrieve in recognizable form, I began a study of the science of information storage, processing, and transmission called “information theory.”

Information theory was developed in the 1940s by a young MIT engineer and mathematician named Claude Shannon.  Shannon had been studying an obscure branch of algebra to which few, if any, people were paying attention.  He took nineteenth-century mathematician George Boole’s system of putting logical expressions in mathematical form (the source of Boolean algebra) and applied its categories of “true” and “false” to switches found in electronic circuits.  His master’s thesis has been called “possibly the most important, and also the most famous, master’s thesis of the century.”[i]  It eventually became the foundation for digital-circuit and digital-computer theory.  Nor was Shannon finished laying foundations.  He continued to develop his ideas and eventually published “A Mathematical Theory of Communication.”  Scientific American later called it “the Magna Carta of the information age.”[ii]  Shannon’s theory of information provided a set of mathematical rules for analyzing how symbols and characters are transmitted across communication channels.

As my contact with the medical field increased my interest in the origin of life, I read more and more about Shannon’s theory of information.  I learned that his mathematical theory could be applied to DNA, but that there was an unusual catch.  Shannon’s theory is based upon a fundamental intuition: information and uncertainty are inversely related.  The more informative a statement is, the more uncertainty it eliminates.  For example, I live in West Texas.  If you were to tell me that it might snow in January, that would not be a very informative statement.  In the 22 years I have lived here, we have had anywhere from a dusting of snow to ten inches in 17 of those 22 years, so telling me that reduces very little uncertainty.  However, I know very little about the weather in Boise, Idaho.  If you were to tell me that on May 18 last year Boise had an unseasonably cold day resulting in a light dusting of snow, that would be an informative statement.  It would tell me something I could not have predicted from what I already know; it would reduce my uncertainty about Boise’s weather on that day.

Claude Shannon wanted to develop a theory that could quantify the amount of information stored in or conveyed across a communication channel.  He did this in two steps: first by linking the concepts of information and uncertainty, and second by linking both concepts to measures of probability.  According to Shannon, the amount of information conveyed (and the amount of uncertainty reduced) by a series of symbols or characters is inversely related to the probability of the particular event, symbol, or character occurring.  Let us think about what this actually means; a couple of examples will help.

Imagine rolling a six-sided die (“die” being the singular of “dice”).  Also, think about flipping a coin.  The die comes up on the number 6.


The coin lands on tails.


  • Before rolling the die, there were six possible outcomes.
  • Before flipping the coin, there were two possible outcomes.

The cast of the die eliminated more uncertainty and, in Shannon’s theory, conveyed more information than the coin toss.  Notice that the more improbable event (the die coming up 6, at 1 chance in 6, versus tails at 1 chance in 2) conveys more information; this becomes a central point in the DNA enigma later.  By equating information with the reduction of uncertainty, Shannon’s theory implies a mathematical relationship between information and probability.  Specifically, it shows that the amount of information conveyed by an event is inversely related to the probability of its occurrence.  The greater the number of possibilities, the greater the improbability of any one event actually occurring, and therefore the more information that is transmitted when that particular possibility occurs.
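The die-versus-coin comparison can be checked in a few lines of code.  Footnote [iii] gives Shannon’s formula: the information conveyed by an event of probability p is I = −log2(p), measured in bits.  The sketch below is illustrative only; the helper name `shannon_info` is mine, not from any standard library:

```python
import math

def shannon_info(probability):
    """Bits of information conveyed by an event of the given probability,
    using Shannon's I = -log2(p)."""
    return -math.log2(probability)

# Rolling a 6 on a fair die: probability 1/6.
die_info = shannon_info(1 / 6)   # about 2.585 bits

# Tails on a fair coin: probability 1/2.
coin_info = shannon_info(1 / 2)  # exactly 1 bit

# The rarer outcome carries more information.
print(f"die roll: {die_info:.3f} bits, coin flip: {coin_info:.3f} bits")
```

Running this shows the die roll eliminating more uncertainty (more bits) than the coin flip, exactly as the text describes.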

Shannon’s theory also implies that information increases as a sequence of characters grows.  The probability of getting heads in a single flip of a fair coin is 1 in 2.  The probability of getting four heads in a row is 1/2 × 1/2 × 1/2 × 1/2, that is, (1/2)^4 or 1/16.  Therefore, the probability of attaining a specific sequence of heads and tails decreases as the number of trials increases, and the amount of information provided increases correspondingly.[iii]
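The coin-flip arithmetic is worth verifying directly, because it reveals a convenient property: since the logarithm turns multiplication into addition, multiplying probabilities means simply adding bits.  A short Python sketch (the helper `shannon_info` is an illustrative name, not a library function):

```python
import math

def shannon_info(p):
    """Bits of information for an event of probability p (I = -log2 p)."""
    return -math.log2(p)

# Probability of four heads in a row: (1/2)^4 = 1/16.
p_four_heads = (1 / 2) ** 4

# Probabilities multiply, so information (a logarithm) adds:
info_four_heads = shannon_info(p_four_heads)  # 4 bits, computed in one step
info_summed = 4 * shannon_info(1 / 2)         # 1 bit per flip, four flips

print(p_four_heads, info_four_heads, info_summed)
```

Both routes give 4 bits, which is why information theorists prefer the additive logarithmic measure mentioned in footnote [iii].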

Think of it this way; it might help.  A paragraph contains more information than the individual sentences that make it up; a sentence contains more information than the individual words in it.  All other things being equal, short sequences carry less information than long sequences.  Shannon’s theory explains why in mathematical terms: improbabilities multiply as the number of characters (and combinations of possibilities) grows.  The important thing for Shannon was that his theory provided a way of measuring the amount of information in a system of symbols or characters.

His equations for calculating the amount of information present in a communication system could be readily applied to any sequence of symbols or coding system that used elements that functioned in a manner similar to alphabetic characters.  Within any given alphabet of x possible characters (where each character has an equal chance of occurring), the probability of any one of the characters occurring is 1 chance in x.  For instance, if a monkey could bang randomly on a simplified typewriter possessing only keys for the 26 English letters, and assuming he was a perfectly random little monkey, there would be 1 chance in 26 that he would hit any particular letter at any particular moment.


The greater the number of alphabetic characters in use in the system (the greater the value of x), the greater the amount of information conveyed by the occurrence of a specific character in a sequence.  In systems where the value of x is known, as in a code or language, mathematicians can generate precise measures of information using Shannon’s equations.  The greater the number of possible characters at each place in the sequence, and the longer the sequence of characters, the greater the Shannon information associated with the sequence.
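Those two quantities, alphabet size and sequence length, are all Shannon’s measure needs.  A minimal sketch, assuming (as the text does) that all x characters are equally probable; the function name `sequence_capacity` is illustrative:

```python
import math

def sequence_capacity(alphabet_size, length):
    """Shannon information, in bits, of a sequence of `length` characters
    drawn from an alphabet of `alphabet_size` equally probable characters:
    length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# One random keystroke on the monkey's 26-key typewriter
# (1 chance in 26 for any particular letter):
per_letter = sequence_capacity(26, 1)    # about 4.700 bits

# A longer random string from the same keyboard carries proportionally more:
ten_letters = sequence_capacity(26, 10)  # about 47.004 bits

print(f"{per_letter:.3f} bits per letter, {ten_letters:.3f} bits for ten")
```

Doubling the alphabet or lengthening the sequence both raise the total, matching the two “greater the…” clauses above.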

Remember I said there was a catch.  Well, here it is.  Shannon’s theory and his equations provide a powerful way to measure the amount of information stored in a system or transmitted across a communication channel, but they have an important limit: Shannon’s theory does not, and cannot, distinguish merely improbable sequences of symbols from those that convey a message or “produce a specific effect,” as Webster’s second definition puts it.

As one of Shannon’s collaborators, Warren Weaver, explained in 1949, “The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage.  In particular, information must not be confused with meaning.”[iv]  “What does that mean?” I hear you asking.  Consider two sequences of alphabetic characters:

“In the beginning God created”

“kd lse ebmgtxodq Pmw wpfuzjf”

Both of these sequences have an equal number of characters.  Both are composed from the same 26-letter English alphabet, the amount of uncertainty eliminated by each letter (or space) is identical, and the probability of producing each of the two sequences at random is identical.  Therefore, both sequences have an equal amount of information as measured by Shannon’s theory.  So what is the difference?  One of these sequences communicates something, while the other does not.  Why is that?

Clearly, the difference has something to do with the way the alphabetic characters are arranged.  In the first instance, the characters are arranged in a precise way to take advantage of a preexistent convention or code, that of English vocabulary, in order to communicate something.  Writing those words in that specific sequence invokes specific concepts: the concept of “In,” the concept of “beginning,” and so on, concepts that have long been associated with specified arrangements of sounds and characters among English speakers and writers.  That specific arrangement allows the characters to perform a communication function.  In the second sequence, the letters are not arranged according to any established convention or code (except perhaps the one known as gobbledygook), and the sequence is therefore meaningless.


Since both sequences are composed of the same number of equally improbable characters, both have a quantifiable amount of information as calculated by Shannon’s theory.  Nevertheless, the first of the two sequences has something, a specificity of arrangement, that enables it “to produce a specific effect” or to perform a function, whereas the second sequence does not.  That is the catch.  Shannon’s theory cannot distinguish functional or message-bearing sequences from random or useless ones.  The theory can only measure the improbability of the sequence as a whole.  It can quantify how much information a given sequence of symbols or characters could carry, but it cannot determine whether the sequence in question “produces a specific effect” or is in fact meaningful.
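The catch is easy to demonstrate: a Shannon-style measure of the two sequences above depends only on their length and alphabet, so it literally cannot tell them apart.  A sketch, assuming a 27-symbol alphabet (26 letters plus the space) with all symbols equally probable; the helper `shannon_bits` is an illustrative name:

```python
import math

def shannon_bits(sequence, alphabet_size=27):
    """Information-carrying capacity in bits, assuming each character is
    drawn with equal probability from a 27-symbol alphabet (26 letters
    plus the space).  Note that only the sequence's length matters."""
    return len(sequence) * math.log2(alphabet_size)

meaningful = "In the beginning God created"
gibberish = "kd lse ebmgtxodq Pmw wpfuzjf"

# Both sequences are 28 characters long, so Shannon's measure
# assigns them exactly the same amount of information,
# even though only one of them means anything.
print(shannon_bits(meaningful), shannon_bits(gibberish))
```

Nothing in the calculation ever looks at what the characters say; that blindness to meaning is precisely the limit the text describes.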

For this reason, information scientists will often say that Shannon’s theory only measures the “information-carrying capacity,” as opposed to the functionally specified information or “information content,” of a sequence of characters or symbols.  This generates an interesting paradox.  Long meaningless sequences of alphabetic characters can have more information than shorter meaningful sequences of alphabetic characters, as measured by Shannon’s information theory.

This suggests that there are important distinctions to be made when talking about information in DNA.  It is important to distinguish information defined as “a piece of knowledge known by a person” from information defined as “a sequence of characters or arrangements of something that produce a specific effect.”  The first of these two definitions does not apply to DNA; the second does.  However, it is also necessary to distinguish Shannon information from information that performs a function or conveys a meaning.  We must distinguish sequences of characters that are (a) merely improbable from sequences that are (b) improbable and specifically arranged to perform a function.  In other words, we must distinguish information-carrying capacity from functional information.  So what kind of information does DNA possess, Shannon information or some other?

For that we go back to:  Will be completed in a week

[i] Gardner, The Mind’s New Science, 11.

[ii] Horgan, “Unicyclist, Juggler and Father of Information Theory.”

[iii] Shannon, “A Mathematical Theory of Communication.” Information theorists found it convenient to measure information additively rather than multiplicatively. Thus, the common mathematical expression (I =– log2p) for calculating information converts probability values into informational measures through a negative logarithmic function, where the negative sign expresses an inverse relationship between information and probability.

[iv] Shannon and Weaver, The Mathematical Theory of Communication, 8.

Intelligent Design

Things to consider when thinking about ID

Obviously, the genome and the cell’s information-processing and storage system manifest many features: hierarchical filing, nested coding of information, context dependence of lower-level informational modules, sophisticated strategies for increasing storage density, and so on.  These are exactly the features we would expect to find if the system had been intelligently designed and intelligently integrated to function together with as little duplication of effort and waste as possible.  On the other hand, many of these incredibly complex features are not easily explained by the standard materialistic evolutionary mechanisms.  “Amoebas to atheists” is just a giant leap of faith.


On top of that, these incredible informational features are found not only in the highest-level multi-cellular organisms but also in single-celled prokaryotes.  This suggests an intriguing, if radical, possibility.  It suggests that intelligent design may have played a role in the origination of complex multi-cellular organisms, and that mutation and selection, along with those other undirected mechanisms of evolutionary change, do not completely account for the origin of these higher forms of life.

Might intelligent design have played a role in biological evolution, that is, in the origin or historical development of new living forms from simpler preexisting forms?  Given the importance of information to living systems, and given that all forms of life, including the most complex multi-cellular organisms, display distinctive hallmarks of design in their informational systems, there would now seem to be increasing reason to consider this possibility.

The neo-Darwinian perspective has limitations that leave a number of research questions unaddressed.  We now know that organisms contain information of different types at every organizational level in the cell, including ontogenetic or structural information not encoded in DNA.  Yet according to neo-Darwinism, new form and structure arise as the result of information-generating mutations in DNA.  Reconciling these difficulties has required some linguistic gymnastics on the part of neo-Darwinism’s leading proponents.

Neo-Darwinism has long assumed, in its population-genetics models of evolutionary change, a number of things about genes that we now know (because of extensive scientific experimentation) to be incorrect.  For example, these models assume:

  • that genetic information is context-independent (a specific codon produces one effect only);
  • that genes independently assort (migrating to one another to build the appropriate proteins); and
  • that genes can mutate indefinitely with little regard to extragenomic and other functional constraints (when in fact a single mutation can destroy an existing cell).

I wanted to include some images of the above concepts, but there are none that really look good and explain the concepts well.  When we get to describing the concepts in detail, it will make more sense and I will be able to use some images.

In short, neo-Darwinism gives primacy to the gene as the focus of biological change and innovation.  By doing so, it assumes a one-dimensional conception of biological information.  In this way neo-Darwinism provides little reason to consider or investigate (and every reason to ignore) the additional tiers of information and other codes that reside beyond the gene and throughout the cell.  Advocates of intelligent design, by contrast, not only acknowledge, but expect to find, sophisticated modes of information storage and processing in the cell.  The theory of intelligent design treats the hierarchical organization of information as theoretically significant; advocates of the theory have naturally shown intense interest in the cell’s informational hierarchies and in the intricate modes of coding found throughout the cell’s interior and even in the cell wall itself.

A design-theoretic perspective tends to encourage questions about the hierarchies of information in life that neo-Darwinists tend to (or prefer to) ignore, questions you have probably never thought about, but which are necessary to answer the mystery of the mystery.  Questions such as:

  • Where exactly does this ontogenetic[1] information reside?
  • How does it affect the function of lower-level genetic modules?
  • How many types of information are present in the cell?
  • What part do they play in maintaining the integrity and functioning of the cell?
  • How much ontogenetic information is present in the cell?
  • How do we measure this information, given that it is often structural and dynamic rather than digital and static?
  • In addition, how mutable are various forms of non-DNA-based information— if at all?

We know that animal body plans[2] are static over long periods of time: once an animal with a segmented body or multiple eyes appears, it stays in the ‘evolutionary’ tree for a long while.  Is this morphological stasis[3] the result of constraints imposed upon mutability by the interdependence of informational hierarchies?  If so, what are those constraints?  Are there other constraints, even probabilistic constraints operating at the level of individual genes and proteins, that limit the transformative power of the selection and mutation mechanism?  Given the phenomenon of “phenotypic plasticity”[4] (individuals in a population with the same genotype that have different phenotypes) and the recurrence of similar variations in the same species, how much variability in organisms is actually the result of preprogramming as opposed to random mutations?  If these variations continue to recur, are they the result of genetic preprogramming?  If so, then where does the requisite information for these programs reside, and how is it expressed chemically within the cell?  How many phenomena currently regarded as examples of so-called neo-Lamarckian processes[5] can properly be attributed to a preprogrammed, intelligently designed, adaptive capacity?  All these questions arise naturally from a design-theoretic perspective, yet they have little place in a neo-Darwinian framework, whose proponents continue to mock them.

Then we have questions about the structure, function, and composition of living systems themselves.  Some are questions about the efficacy of various evolutionary mechanisms: questions about whether these mechanisms can explain various appearances of design as well as an actual designing intelligence could.  Can random mutation and natural selection, by mutational happenstance, produce new body parts and structures?  Can selection and mutation produce the novel genes and proteins needed to keep these new parts or structures working?  If not, are there mechanisms or features of life that impose limits on biological change?  On the other hand, are there perhaps other materialistic mechanisms that have the causal powers to produce novel forms of life?  And what kinds of causal powers would those be?  If not, and if the pervasive hallmarks of intelligent design in complex organisms indicate actual design, what does that imply about what else we should find in living systems or the fossil record?

You can see by now, I am sure, where this series is going.  Is the science of intelligent design equal or superior to neo-Darwinian mutation and natural selection?  Which theory best explains what we know and understand of the past?  Which theory best explains what microbiologists are currently discovering?  We will follow that direction in this series of articles.  I may slip in a Biblical verse now and then, but only to indicate that what we are currently discussing was foretold years ago.  I will not base any of the scientific facts on creationism; that is completely different from intelligent design (ID).

Are there patterns in the fossil record indicative of intelligent activity?  If so, do they suggest that intelligent design played a role in the production of novel body plans or in smaller innovations in form?  Is intelligent design necessary to build new forms of life that exemplify the higher taxonomic categories, such as orders, classes, and phyla?  Could some form of design be needed to build the new forms encompassed by lower taxonomic categories such as genera and species?  How much morphological change can undirected mechanisms such as selection and mutation produce, and at what point, if any, would intelligent design need to participate?

Such questions move from whether intelligence played a role in the history of life to when, where, and how often intelligence acted.  If intelligent design played a role in the history of life after life’s initial origin, did that designing agency act gradually or discretely?  If a designing intelligence generates new forms by infusing new information into the biosphere, do we see evidence of that anywhere in the geological time scale?  If so, where, and how can we detect it?  Did it effect a gradual transformation of form from simple to more complex organisms?  Alternatively, did that intelligence effect more sudden transformations or innovations in biological form, thus initiating new and separate lines of genealogical descent?

In other words, is the history of life monophyletic or polyphyletic?  If polyphyletic, how many separate lines of descent or trees of life have existed during life’s history?  Would that explain the gaps in the fossil record, in embryological development, in comparative anatomy, and in phylogenetic studies?  If life’s history is polyphyletic, how wide are the envelopes of variability in the separate trees of life?  Would that explain the ridiculous number of cladograms that overlap and conflict with one another?  Conversely, if undirected evolutionary mechanisms are sufficient to account for the origin of all new forms of life, is it possible that the pervasive signs of design in higher forms of life were preprogrammed to unfold from the origin of life itself?  If design was thus “front-loaded” into the first simple cell, what does that imply about the capacity of cells to store information for future adaptations?  And what should the structure and organization of the prokaryotic genome look like in that case?

Many of the preceding questions follow from considering the informational signature of intelligence in life.  DNA, RNA, protein folds, and ATP motors are all incredibly complex, and it is difficult to see how they could arise by mutation alone.  But design arguments from irreducible complexity suggest other kinds of research questions.  Are specific molecular machines irreducibly complex?  If so, is intelligent design the only known cause that produces this feature of systems?  How can evolution explain the origin of the flagellar motor and other molecular machines?  Indeed, the controversy over irreducible complexity has already motivated specific new lines of empirical inquiry and generated a number of new research questions.  Many of these lines of inquiry are admittedly radical from an orthodox neo-Darwinian point of view, and some scientists may not want to consider them.  Nevertheless, no one can argue that the scientists who do will have nothing to do.


A genus can contain multiple species, just as a family contains many genera, and so on up the chain.  We will discuss this classification system later in more detail and show why it is useless for indicating how species are related to one another.

While the first three articles and this one have been more or less simplified, the remaining articles will require at least a high-school education to understand.  I will do my best to write as understandably as possible.  Enjoy the series.  Coming up: the DNA enigma, the molecular labyrinth, the origin of science and design, chance and pattern recognition, playing the odds, and much more.

[1] the origination and development of an organism, usually from the time of fertilization of the egg to the organism’s mature form

[2] an assemblage of morphological features shared among many members of a phylum-level group.  This term, usually applied to animals, envisages a “blueprint” encompassing aspects such as symmetry, segmentation and limb disposition. Evolutionary developmental biology seeks to explain the origins of diverse body plans

[3] unchanging body type implying constancy with little change to the basic defining characters.

[4] the ability of an organism to change its phenotype in response to changes in the environment

[5] the idea that an organism can pass on characteristics that it has acquired during its lifetime to its offspring (also known as heritability of acquired characteristics or soft inheritance)

Intelligent Design

Why ID, part 3

The first part:

Second part:

All right, here is the next section.  I hope you have read the footnotes on the previous articles, because we will end up discussing some of them in this and future articles in the series.  Definitely read the footnotes in this article (all of the information).  They concern the individuals who (because we are evolutionarily more advanced than they were) did not know as much as we do now.  Of course, they are also the ones who created the ideals, concepts, and theories that eventually led to physical laws that our wonderfully, overly intelligent scientists today still have been unable to completely understand.  So what those scientists try to do instead is discredit what everyone else accepts.  Understandable; you have to push the edges.  But you should have a valid reason, design, and purpose for pushing, other than to make your name famous.

So off we go again seeking the mystery of the mystery:


The origin of the first life remains a hole in the elaborate tapestry of naturalistic explanations.  Laplace’s[1] nebular hypothesis provided additional support for a materialistic conception of the cosmos, but it also complicated attempts to explain life on earth in purely material terms, which so many wanted and still hope for.  Laplace’s theory strongly suggested that the earth had once been too hot to sustain life, implying that the environmental conditions needed to support life could have existed only after the planet had cooled below the boiling point of water.  Because of this, the nebular hypothesis implied that life had not existed eternally, but instead appeared at a definite time in earth’s history.  Ernst Haeckel, for instance, in The History of Creation, stated, “We can, therefore, from these general outlines of the inorganic history of the earth’s crust, deduce the important fact that at a certain definite time life had its beginning on earth and that terrestrial organisms did not exist from eternity” (401).

To scientific materialists, life had usually been regarded as an eternal given, a self-existent reality, like matter itself.  But this is no longer a credible explanation for life on earth.  There apparently was a time when there was no life on earth, and then, bang, life appeared.  To many scientists of a materialistic turn of mind, this implied that life must have evolved from nonliving materials present on a cooling prebiotic earth.  However, no one has ever had, and still does not have, a detailed explanation for how this might have happened.  As Darwin himself noted in 1866, “Though I expect that at some future time the [origin] of life will be rendered intelligible, at present it seems to me beyond the confines of science[2].”


The origin-of-life problem was rendered more acute by the failure of the theory of “spontaneous generation,” the idea that life originates continually from the remains of once-living matter: one kind of living matter dies off, and another kind develops from the materials it was composed of.  This theory suffered a series of setbacks during the 1860s because of the work of Louis Pasteur[3].  In 1860 and 1861, Pasteur demonstrated that microorganisms or germs exist in ordinary air and can multiply under favorable conditions[4].  He showed that if normal air enters sterile vessels, contamination of the vessels with microorganisms occurs.  Pasteur argued that the observed “spontaneous generation” of mold or bacterial colonies on rotting food or dead meat could be explained by the failure of experimenters to prevent contamination with preexisting organisms from the atmosphere[5].  Pasteur’s work seemed to refute the only naturalistic theory of life’s origin then under experimental scrutiny.

The doctrine of spontaneous generation did not die easily (a pattern we will find with many useless theories).  Even after Pasteur’s work, Henry Bastian[6] continued to find microbial organisms in various substances that had been sealed and “sterilized” at 100 degrees C or higher.  Not until the 1870s, when microbiologists like Cohn, Koch, and Tyndall perfected methods of killing heat-resistant spores, were Bastian’s observations discredited.  Despite an increasingly critical scientific response to his experimental methods and conclusions, Bastian continued to offer observational evidence for spontaneous generation from inorganic matter for another thirty years.  Experiments supposedly establishing the spontaneous occurrence of microorganisms remained tenable only as long as sterilization methods were inadequate to kill existing microorganisms or to prevent bacterial contamination of experimental vessels from the surrounding environment.  When the sources of microorganisms were identified and various methods of destroying them perfected, the observational evidence for spontaneous generation was withdrawn or discredited.


After the turn of the century, continued experimentation seemed, in the minds of some scientists, to confirm that living matter is too complex to organize itself spontaneously, whether from organic or inorganic predecessors. Although Huxley and Haeckel accepted Pasteur’s results, both insisted that his work was not relevant to abiogenesis (life arising from nonliving matter), since his experiments discredited only theories of what Haeckel called “plasmogeny” or what Huxley called “heterogenesis,” i.e., spontaneous generation from once-living matter[7].

Late-Victorian-era biologists expressed little, if any, concern about the absence of detailed explanations for how life had first arisen. Life is there, but we do not know why, nor do we want to pursue that particular line of inquiry. As a contrarian, the obvious question for me was: Why not?

Further study found that these scientists actually had several reasons for holding onto this point of view. Even though many scientists knew that Darwin had not solved the origin-of-life problem, they were confident that the problem could be solved, because they were deeply impressed by the results of Friedrich Wöhler[8]’s experiment. Before the nineteenth century, many biologists had taken it as almost axiomatic, or self-evident, that the matter out of which life was made was qualitatively different from the matter in nonliving chemicals. These biologists thought living things possessed an immaterial essence or force, an élan vital (a creative life force present in all living things and responsible for evolution), that conferred a distinct and qualitatively different kind of existence upon organisms as opposed to, say, rocks[9].

Scientists who held this view were called “vitalists[10],” a group that included many pioneering biologists. Since this mysterious élan vital[11] was responsible for the distinctive properties of organic matter, vitalists also thought that it was impossible to change ordinary inorganic matter into organic matter. After all, the inorganic matter simply lacked the special ingredient, the immaterial right “stuff.” And we are not talking about the late-1960s and ’70s “right stuff” of the astronauts.

That is why Wöhler’s experiment was so incredibly important and, to some, ‘simply marvelous.’ He showed that two different types of inorganic matter could be combined to produce organic matter, albeit a somewhat inglorious type. Wöhler’s experiment had a direct influence on thinking about the origin of life. If organic matter could be formed in the laboratory by combining two inorganic chemical compounds, then perhaps organic matter could have formed the same way in nature in the distant past, even if completely by accident. And if organic chemicals could arise from inorganic chemicals, then surely life (LIFE) itself could arise in the same way? (Right, of course it would have to be right.) If vitalism was as totally wrong as it now appeared to be, then what is life but a combination of chemical compounds that somehow developed intelligently while following the first law of thermodynamics?

Developments in other scientific disciplines reinforced this trend in thought. In the 1850s, a German physicist named Hermann von Helmholtz[12], a pioneer in the study of heat and energy (thermodynamics), showed that the principle of conservation of energy applied equally to both living and nonliving systems. The conservation of energy (the first law of thermodynamics) is the idea that energy is neither created nor destroyed during physical processes such as combustion, photosynthesis, or metabolism, but merely converted to other forms (forms we may not be aware of, but just give us time).

Take gasoline (not too far, though). Even after the energy used to refine it, the energy contained within it is not destroyed; it is converted into heat (thermal) energy by exploding or burning, or put to work in an engine. In an engine, after the spark is applied in the cylinders, the energy is turned into mechanical (kinetic) energy to move the car. Helmholtz demonstrated that this same principle of energy conservation applied to living systems. How did he do that? First he needed a subject to attach various electrodes to; then he measured the amount of heat that muscle tissues generated during exercise[13]. This experiment showed that although muscles consume chemical energy, they also expend energy in the work they perform and the heat they generate. These processes were in balance and supported what would become the “first law of thermodynamics”: energy is neither created nor destroyed.
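The bookkeeping behind this kind of argument can be sketched in a few lines of code. The numbers below are illustrative assumptions, not Helmholtz’s actual measurements; the point is only that the first law requires the energy inputs and outputs to balance:

```python
# A minimal sketch of the first law of thermodynamics as an energy balance.
# The figures are invented for illustration, not taken from any experiment.

def energy_balance(chemical_energy_in_j: float,
                   work_out_j: float,
                   heat_out_j: float) -> float:
    """Return the unaccounted-for energy in joules; zero means the books balance."""
    return chemical_energy_in_j - (work_out_j + heat_out_j)

# Suppose a muscle consumes 100 J of chemical energy, performs 25 J of
# mechanical work, and releases 75 J as heat:
residual = energy_balance(100.0, 25.0, 75.0)
print(residual)  # 0.0 -- energy is neither created nor destroyed
```

On this accounting, a vitalist’s “free” energy from an élan vital would show up as a persistently negative residual, which is exactly what the measurements never found.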


Even before this first law of thermodynamics had been refined, Helmholtz used a version of it to argue effectively against vitalism. If living organisms are not subject to energy conservation, that is, if an immaterial and immeasurable vital force can provide energy to organisms “for free,” then perpetual motion[14] would be possible. However, Helmholtz argued, we know from observation that perpetual motion is impossible.


Other developments supported this critique of vitalism. During the 1860s and 1870s, scientists identified the cell as the energy converter of living organisms (we will explore ATP synthesis later). Experiments on animal respiration established the utility of chemical analysis for understanding respiration and other energetic processes in the cell (this research has since helped asthmatics, sports figures, and astronauts)[15]. Since these new chemical analyses ended up accounting for all the energy that the cell used in metabolism, biologists increasingly thought it unnecessary to refer to ‘vital forces’. As new scientific discoveries undermined long-standing vitalist doctrines, they unfortunately bolstered the confidence of scientific materialists.

German materialists, such as the biologist Ernst Haeckel[16], denied any qualitative distinction between life and nonliving matter: “We can no longer draw a fundamental distinction between organisms and anorgana [i.e., the nonliving].”  In 1858, in an essay entitled “The Mechanistic Interpretation of Life,” another German biologist, Rudolf Virchow[17], challenged vitalists to “point out the difference between chemical and organic activity.”  With vitalism in decline, Virchow boldly asserted his version of the materialist credo: “Everywhere there is mechanistic process only, with unbreakable necessity of cause and effect.”[18]

Life processes could now be partially explained by various physical or chemical mechanisms. Since mechanisms involve material parts in motion and nothing more, this seemed to mean that the current functioning of organisms could be explained by reference to matter and energy alone. This encouraged scientific materialists to assume they could easily devise explanations for the origin of life as well. (You’ve heard the old adage “a little knowledge is dangerous”; well, this was surely the case.) Haeckel himself (you read the footnote right: the racist, and the one who cheated on his embryo drawings) would be one of the first scientists to try. If life was composed solely of matter and energy, then what else besides matter in motion, that is, material processes, could possibly be necessary to explain life’s origin? For materialists such as Haeckel, it was inevitable that scientists would succeed in explaining how life had arisen from simpler chemical precursors, and that they would do so only by reference to materialistic processes.

“Haeckel’s attitude, and that of other contemporary Darwinians, to the question of the origin of life was first and foremost an expression of their worldview. Abiogenesis was a necessary logical postulate within a consistent evolutionary conception that regarded matter and life as stages of a single historical continuum.”[19] For Haeckel, finding a materialistic explanation for the origin of life was not just a scientific possibility; it was a philosophical imperative.

This basically started the concept of evolution on its steamroller act to dominate thought for the next hundred years. The essential idea for many scientists during this time was matter first, and the central image was increasingly that of evolution, of nature unfolding in an undirected way, with Darwinian hypotheses suggesting the possibility of an unbroken evolutionary chain up to the present. (Details? We’ll fill them in later.) The origin of life was a gigantic missing link in that chain, but surely, it was thought, the gap would soon be bridged. Darwin’s theory, in particular, inspired many evolutionary biologists to begin formulating theories to solve the origin-of-life problem. It started a movement to attempt to “extend evolution backward” in order to explain the origin of the first life.

Darwin’s theory inspired this confidence in other scientific endeavors for several reasons. First, Darwin had established an important precedent. He had shown that there was a plausible means by which organisms could gradually produce new structures and greater complexity by a purely undirected material process. Why could not a similar process explain the origin of life from preexisting chemicals?


Darwin’s theory also implied that living species did not possess an essential and immutable nature. Since Aristotle, most biologists had believed that each species or type of organism possessed an unchanging nature or form; many believed that these forms reflected a prior idea in the mind of a designer. These life forms had a purpose and a reason for existence on earth. However, Darwin argued that species can change, or “morph,” over time. His theory challenged this ancient view of life. Classification distinctions among species, genera, and classes did not reflect unchanging natures; they could be rearranged in various ways and reflected only perceived differences in features that organisms might possess for a time. These categories were temporary and not set in stone, which gave evolutionists and biologists a great deal of leeway in trying to categorize a species.

As Hull, a philosopher of biology, explains, Darwin posited “that species are not eternal but temporary, not immutable but quite changeable, not discrete but graduating imperceptibly through time one into another.[20]”  As Darwin himself said in On the Origin of Species, “I was much struck how entirely vague and arbitrary is the distinction between species and varieties…. Certainly no clear line of demarcation has yet been drawn between species and subspecies… or, again, between subspecies and well-marked varieties” (104, 107).

If Darwin was right, then it was futile to maintain rigid distinctions in biology based on ideas about unchanging forms or natures.  This reinforced the conviction that there was no impassable or unbridgeable divide between inanimate and animate matter. Chemicals could “morph” into cells, just as one species could “morph” into another.  John Tyndall[21] argued, “There does not exist a barrier possessing the strength of a cobweb to oppose the hypothesis which ascribes the appearance of life to that ‘potency of matter’ which finds expression in natural evolution.”

Darwin’s theory also speculated on the importance of environmental conditions in the development of new forms of life. If conditions arose that seemed to favor one organism and its particular mutation, or one form of life over another, those conditions would affect the development of a population through the mechanism of natural selection, a theme that evolutionists such as Lamarck and Matthew had articulated in various ways since early in the nineteenth century.

This aspect of Darwin’s theory suggested that environmental conditions may have played a crucial role in making it possible for life to arise from inanimate chemistry. It was in this context that Darwin himself first speculated about the origin of life. In an 1871 letter to the botanist Joseph Hooker[22], available in the Cambridge library archive, Darwin sketched out a purely naturalistic scenario for the origin of life. He emphasized the role of special environmental conditions and just the right mixture of chemical ingredients as crucial factors in making the origin of life possible: “It is often said that all the conditions for the first production of a living organism are present…. But if (and Oh! what a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, etc., that a protein compound was chemically formed ready to undergo still more complex changes, at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed[23].” Although Darwin conceded that his speculations ran well ahead of the available evidence, the basic approach he outlined would seem increasingly plausible as a new theory about the nature of life came to prominence in the 1860s and 1870s.

In researching bio-evolution, you will eventually come across a statement by Russian scientist Aleksandr Oparin[24]. Oparin was the twentieth century’s undisputed pioneer of origin-of-life studies.  When you examine his comments, you can identify another key reason for the Victorian lack of concern about the origin-of-life problem.  “The problem of the nature of life and the problem of its origin have become inseparable,” he said[25].

To explain how life originated, scientists first have to understand what life is. That, in and of itself, was a major undertaking back then, and it still is today, even with all of our high-tech instruments. The understanding of what life is, in turn, defines what theories of the origin of life must explain.

The Victorians were not especially concerned with the origin-of-life problem because they thought the simplest life was, by definition, simple. They really did not think there was much to explain. Biologists during this period assumed that the origin of life would eventually be explained as the by-product of a few simple chemical reactions, rather like photosynthesis (which at that time they still had no real idea how it worked). During this time, as many do now, scientists appreciated that many intricate structures in plants and animals appeared designed, an appearance that Darwin flippantly explained as the result of natural selection and random variation. However, for Victorian scientists, single-celled life did not look particularly designed, most obviously because scientists at the time could not see individual cells in any detail. The big, powerful electron and x-ray microscopes were still a number of years away. Cells were viewed as “homogeneous and structure-less globules of protoplasm,”[26] amorphous sacs of chemical jelly, not intricate structures manifesting the appearance of design. In the 1860s, a new theory of life encouraged this view. It was called the “protoplasmic theory,” and it equated vital function with a single, identifiable chemical substance called protoplasm. This strengthened the conviction among many scientists that vital function was ultimately reducible to a “physical basis.”[27]

According to this theory, the attributes of living things derived from a single substance located inside the walls of cells. The idea was proposed as a result of several scientific developments in the 1840s and 1850s. In 1846, a German botanist named Hugo von Mohl[28] demonstrated that plant cells contained a nitrogen-rich material, which he called protoplasm.[29] He also showed that plant cells need this material for viability. Mohl and the Swiss botanist Karl Nägeli[30] later suggested that protoplasm was responsible for the vital function and attributes of plant cells and that the cell wall merely constituted an “investment lying upon the surface of the [cell] contents, secreted by the contents themselves.”[31] This turned out to be fantastically inaccurate. The cell wall is a separate and fascinatingly intricate structure containing a system of gates and portals that control traffic in and out of the cell. Nevertheless, Mohl and Nägeli’s emphasis on the importance of the cell contents received support in 1850, when a biologist named Ferdinand Cohn[32] showed that descriptions of protoplasm in plants matched earlier descriptions of the “sarcode” found in the cavities of unicellular animals.[33] By identifying sarcode as animal-cell protoplasm, Cohn connected his ideas to Mohl’s. Since both plants and animals need this substance to stay alive, Cohn established that protoplasm was essential to all living organisms. When, beginning in 1857, a series of papers by the scientists Franz Leydig, Heinrich Anton de Bary, and Max Schultze suggested that cells could exist without cellular membranes (though, in fact, we now know they cannot), scientists felt increasingly justified in identifying protoplasm as life’s essential ingredient[34].
Thus, in 1868 when the famous British scientist Thomas Henry Huxley declared in a much publicized address in Edinburgh that protoplasm constituted “the physical basis or matter of life” (emphasis in original), his assertion expressed a gathering consensus.[35] With the protoplasmic theory defining the chemical basis of life, it seemed plausible that the right chemicals, in the right environment, might combine to make the simple protoplasmic substance. If so, then perhaps the origin of life could be explained by analogy to simple processes of chemical combination, such as when hydrogen and oxygen join to form water. If water could emerge from the combination of two ingredients as different from water as hydrogen and oxygen, then perhaps life could emerge from the combination of simple chemical ingredients that by themselves bore no obvious similarity to living protoplasm.

Next up: the chemical basis for the mystery of the mystery.

Quite a few references in this article, are there not? They are there to assist you in knowing what research I have done to write this series of articles. The references to the various experimenters in various fields provide background knowledge of those contributing to the information. (I do not know if you have noticed that a large number of them are German scientists. If you were to continue studying the history and philosophy of biology and evolution, you would see how that led to the eugenics movement, Margaret Sanger and Planned Parenthood, and the Nazi Reich’s interest in its application by law in the U.S.) Not to worry, we will confine our studies to the history and philosophy of microbiology and its ability to explain how life came from non-life, and germs to Germans.

[1] Remember him from Why ID part 2. If not, go back and look him up.

[2] Darwin, More Letters of Charles Darwin, 273.

[3] a French chemist and microbiologist renowned for his discoveries of the principles of vaccination, microbial fermentation and pasteurization. He is remembered for his remarkable breakthroughs in the causes and preventions of diseases. He created the first vaccines for rabies and anthrax. His medical discoveries provided direct support for the germ theory of disease and its application in clinical medicine. He is best known to the general public for his invention of the technique of treating milk and wine to stop bacterial contamination, a process now called pasteurization. He is regarded as one of the three main founders of bacteriology.

[4] Farley, The Spontaneous Generation Controversy, 103ff.; Lechevalier and Solotorovsky, Three Centuries of Microbiology, 35– 37.

[5] Farley, Spontaneous Generation Controversy, 103– 7, 114, 172; Lanham, Origins of Modern Biology, 268.

[6] an English physiologist and neurologist, elected a Fellow of the Royal Society in 1868. He was an advocate of the doctrine of abiogenesis and believed he witnessed the spontaneous generation of living organisms out of nonliving matter under his microscope.

[7] Haeckel, The Wonders of Life, 115; Kamminga, “Studies in the History of Ideas,” 55, 60.

[8] a German chemist, best known for his synthesis of urea, but also the first to isolate several chemical elements.

[9] Glas, Chemistry and Physiology, 118.

[10] a discredited scientific hypothesis that “living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things”.  Where vitalism explicitly invokes a vital principle, that element is often referred to as the “vital spark”, “energy” or “élan vital”, which some equate with the soul.

[11] a term coined by French philosopher Henri Bergson in his 1907 book Creative Evolution, in which he addresses the question of self-organisation and spontaneous morphogenesis of things in an increasingly complex manner. Élan vital was translated in the English edition as “vital impetus.” It is a hypothetical explanation for the evolution and development of organisms, which Bergson linked closely with consciousness, with the intuitive perception of experience and the flow of inner time.

[12] a German physician and physicist. In physiology and psychology, he is known for his mathematics of the eye, theories of vision, ideas on the visual perception of space, color vision research, and work on the sensation of tone and perception of sound. In physics, he is known for his theories on the conservation of energy, work in electrodynamics, chemical thermodynamics, and a mechanical foundation of thermodynamics.


[13] Coleman, Biology in the Nineteenth Century, 129.

[14] Perpetual motion is motion of bodies that continues indefinitely. This is impossible because of friction and other energy-dissipating processes. A perpetual motion machine is a hypothetical machine that can do work indefinitely without an energy source. This kind of machine is impossible, as it would violate the first or second law of thermodynamics.

[15] Steffens, James Prescott Joule and the Concept of Energy, 129– 30; Glas, Chemistry and Physiology, 86.

[16] a German biologist, naturalist, philosopher, physician, professor, marine biologist, and artist who discovered, described, and named thousands of new species, mapped a genealogical tree relating all life forms, and coined many terms in biology, including anthropogeny, ecology, phylum, phylogeny, stem cell, and Protista. Haeckel divided human beings into ten races, of which the Caucasian was the highest, while the primitive races were doomed to extinction. Haeckel claimed that Negroes have stronger and more freely movable toes than any other race, which he took as evidence that Negroes are related to apes, because when apes stop climbing in trees they hold on to the trees with their toes. Haeckel also became embroiled in charges of fraud over his drawings of vertebrate embryology; he was accused of misrepresenting the ages of the different embryos and the sizes of the parts of the embryos.

[17] a German physician, anthropologist, pathologist, prehistorian, biologist, writer, editor, and politician, known for his advancement of public health. He is known as “the father of modern pathology.” He is also known as the founder of social medicine and veterinary pathology.

[18] Virchow, “On the Mechanistic Interpretation of Life,” 115.

[19] Fry, The Emergence of Life on Earth, 58.

[20] Hull, “Darwin and the Nature of Science,” 63– 80.

[21] a prominent Irish physicist. His initial scientific fame arose in the 1850s from his study of diamagnetism. Later he made discoveries in the realms of infrared radiation and the physical properties of air. The quotation is from his Fragments of Science, 434.

[22] one of the greatest British botanists and explorers of the 19th century, a founder of geographical botany, and Charles Darwin’s closest friend.

[23] Darwin, “Letter to Hooker”; see also Darwin, Life and Letters, 18.

[24] a Soviet biochemist notable for his theories about the origin of life, and for his book The Origin of Life. He also studied the biochemistry of material processing by plants and enzyme reactions in plant cells. He showed that many food-production processes were based on biocatalysis and developed the foundations for industrial biochemistry.

[25] Oparin, Genesis and Evolutionary Development of Life, 7.

[26] Haeckel, The Wonders of Life, 135.

[27] Thomas H. Huxley phrased it in 1868 (“ On the Physical Basis of Life”). See also Geison, “The Protoplasmic Theory of Life”; and Hughes, A History of Cytology, 50.

[28] A German botanist and geologist.

[29] Geison, “The Protoplasmic Theory of Life,” 274.

[30] a Swiss botanist. He studied cell division and pollination but became known as the man who discouraged Gregor Mendel from further work on genetics.

[31] As cited in Geison, “The Protoplasmic Theory of Life,” 274.

[32] a German biologist. He is one of the founders of modern bacteriology and microbiology

[33] The descriptions matched those by Felix Dujardin in 1835 and Gabriel Gustav Valentin in 1836. See Geison, “The Protoplasmic Theory of Life”; Hughes, A History of Cytology, 40, 112– 13.

[34] Geison, “The Protoplasmic Theory of Life,” 276. Schultze, in particular, emphasized the importance of protoplasm based on his realization that lower marine animals sometimes exist in a “primitive membraneless condition” and on his identification of protoplasm as the source of vital characteristics like contractility and irritability.

[35] Geison comments that during this period, “The conviction grew that the basic unit of life was essentially a protoplasmic unit” (278). It was during this period that the term “protoplasm” gained wide usage.

Evillution, Intelligent Design

Why ID part 2


In the first installment ( ) we left off with the concept of opposing worldviews. We will now discuss some of the philosophy behind how these worldviews came to sit at opposite ends of the spectrum from each other.


The main theory that pervades all scientific disciplines is that simple material entities governed by natural laws eventually produced the chemical elements from elementary particles. Then these elements, swirling around in some kind of primordial environment (most call it soup), formed complex molecules from the simple chemical elements. Then somehow, these inanimate chemicals became ALIVE! These simple life forms survived all kinds of improbable events to combine into more complex life. Finally, conscious living beings developed and eventually morphed, mutated, and naturally selected their way into YOU and ME. In this view, matter comes first, and conscious mind arrives on the scene much later as a by-product of material processes and undirected evolutionary change. “Chance,” they say; “goo to you” mutation, with natural selection picking the best of the lot by CHANCE.

The Greek philosophers (who were called atomists), such as Leucippus and Democritus, were perhaps the first Western thinkers to articulate something like this view in writing.[1]   The Enlightenment philosophers Thomas Hobbes and David Hume also later espoused this matter-first philosophy.[2]

Following the widespread acceptance of Darwin’s theory of evolution in the late nineteenth century, many modern scientists adopted this view (why they did is the subject for another series of articles, but essentially it came down to this: when in doubt, the materialist account seemed to make sense). This worldview has been called several things, depending upon which scientific discipline you have majored in: naturalism or materialism, or sometimes scientific materialism or scientific naturalism, the latter because many of the scientists and philosophers who hold this perspective think that scientific evidence supports it.

So this brings up a number of questions. Probably not for most of you who are reading this; you may never have imagined that such questions existed, let alone whether they have or have not been answered. That is the reason for my series: to open everyone’s minds to the facts that are out there but that you are unaware of. What are these questions?

Can the origin of life be explained purely by reference to material processes such as undirected chemical reactions or random collisions of molecules?

Can the origin of life be explained without recourse to the activity of a designing intelligence?

Who needs to invoke an unobservable designing intelligence to explain the origin of life, if observable material processes can produce life on their own?

On the other hand, if there is something about life that points to the activity of a designing intelligence, then that raises other philosophical possibilities.

Does a matter-first or a mind-first explanation best explain the origin of life?

Either way, the origin of life is an infinitely interesting scientific topic, but one that has raised incredible philosophical issues as well.

My insatiable desire for information when I was in high school and college blinded me to the fact that only one methodology was being taught at the time. It was taught as the TRUTH with very little supporting information. You might say they wanted us to believe what they were saying on a hope and a prayer.


So let us start unlocking the mystery of the mystery of all things.

Many of the founders of early modern science, such as Johannes Kepler[3], Robert Boyle[4], and Isaac Newton[5], had deep religious convictions. They believed that scientific evidence pointed to a rational mind behind the order and design they perceived in nature, order that is so easy to observe all around us.

Many late-nineteenth-century scientists, by contrast, came to see the cosmos as an autonomous, self-existent, and self-creating system; matter was the most important thing. It appeared to them that the cosmos required no transcendent cause, no external direction or design. Several nineteenth-century scientific theories actually provided some support for this perspective, despite the fragility of the knowledge those theories were based upon.

In astronomy, for example, the French mathematician Pierre Laplace[6] offered an ingenious theory known as the “nebular hypothesis” to account for the origin of the solar system as the outcome of purely natural gravitational forces[7].

In geology, Charles Lyell[8] explained the origin of the earth’s most dramatic topographical features, mountain ranges and canyons, as the result of slow, gradual, and completely naturalistic processes of change such as erosion and sedimentation[9]. This line of thinking eventually contributed to the theories of plate tectonics.

In physics and cosmology, a belief in the infinity of space and time obviated any need to consider the question of the ultimate origin of matter- if it has always been there, then it never originated.  That obviously brings about many other questions, but it was and is easier to avoid them.

In biology, Darwin’s theory of evolution by natural selection suggested that an undirected process could account for the origin of new forms of life without any divine intervention, guidance, or design. Again, the questions left unanswered or only partially explained were deferred until sometime in the future.

Collectively, these theories made it possible to explain all the salient events in natural history from before the origin of the solar system to the emergence of modern forms of life solely by reference to natural processes— unaided and unguided by any kind or type of designing mind or intelligence.  Matter has always existed and could in effect, arrange and rearrange itself into any combination that, by chance, would become more complex as time went on.

But does it? Here we need to dive more into the philosophy of science and the underlying premises of how scientists determine things. We will be delving into some areas of history and science that many of you have never, ever thought about. Fortunately, others have, and what they have formulated is a deeper understanding of how and why you believe the way you do, whether rightly or wrongly.


Continue on with the enigmatic challenge of seeking the mystery of the mystery.

continued at:

[1] Kirk and Raven, The Presocratic Philosophers.

[2] Hobbes, Leviathan; Hume, Dialogues Concerning Natural Religion.

[3] a German mathematician, astronomer, and astrologer. A key figure in the 17th century scientific revolution, he is best known for his laws of planetary motion, based on his works Astronomia nova, Harmonices Mundi, and Epitome of Copernican Astronomy. These works also provided one of the foundations for Isaac Newton’s theory of universal gravitation.

[4] a natural philosopher, chemist, physicist and inventor.  Boyle is largely regarded today as the founder of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle’s law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas if the temperature is kept constant within a closed system.

[5] an English physicist and mathematician who is widely recognised as one of the most influential scientists of all time and a key figure in the scientific revolution. His book Philosophiæ Naturalis Principia Mathematica (“Mathematical Principles of Natural Philosophy”), first published in 1687, laid the foundations for classical mechanics. Newton made seminal contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for the development of calculus.  Newton’s Principia formulated the laws of motion and universal gravitation, which dominated scientists’ view of the physical universe for the next three centuries. He derived Kepler’s laws of planetary motion from his mathematical description of gravity, and then used the same principles to account for the trajectories of comets, the tides, the precession of the equinoxes, and other phenomena.

[6] an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. He restated and developed the nebular hypothesis of the origin of the Solar System and was one of the first scientists to postulate the existence of black holes and the notion of gravitational collapse.

[7] Laplace, Exposition du système du monde.

[8] a British lawyer and the foremost geologist of his day. He is best known as the author of Principles of Geology, which popularized the concept of uniformitarianism—the idea that the Earth was shaped by the same processes still in operation today. His scientific contributions included an explanation of earthquakes, the theory of gradual “backed up-building” of volcanoes, and in stratigraphy the division of the Tertiary period into the Pliocene, Miocene, and Eocene. He also coined the currently-used names for geological eras, Paleozoic, Mesozoic and Cenozoic.

[9] Lyell, Principles of Geology.

Didja Know 12-6-2016

(My comments appear like this)

Many physicists spend their days trying to prove Albert Einstein’s theories correct. One pair of theoretical physicists is hoping to test whether the father of modern physics just may have been wrong about the speed of light.

In his theory of special relativity, Einstein left a lot of wiggle room for the bending of space and time.  But his calculations, and most subsequent breakthroughs in modern physics, rely on the notion that the speed of light has always been a constant, about 186,000 miles per second.
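For reference, the SI value of the speed of light is exactly 299,792,458 metres per second, so the familiar “186,000 miles per second” is a rounded conversion. A quick sketch of the arithmetic (the metre-per-mile factor is the exact statute-mile definition):

```python
# Convert the exact SI speed of light into miles per second.
C_M_PER_S = 299_792_458      # metres per second (exact, by definition of the metre)
M_PER_MILE = 1609.344        # metres per statute mile (exact)

c_miles_per_s = C_M_PER_S / M_PER_MILE
print(round(c_miles_per_s))  # ~186282, which popular accounts round to 186,000
```

So the commonly quoted figure understates the exact value by a few hundred miles per second, which is harmless for the discussion here.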


But, what if it wasn’t always that way?  In a paper published in the November issue of the journal Physical Review D, physicists from Imperial College London and Canada’s Perimeter Institute argue that the speed of light could have been much faster in the immediate aftermath of the Big Bang.  The theory, which could change the very foundation of modern physics, is expected to be tested empirically for the first time.  (could change; get it? Could, not sure, but if I’m right it could)

“The idea that the speed of light could be variable was radical when first proposed, but with a numerical prediction, it becomes something physicists can actually test,” lead author João Magueijo, a theoretical physicist at Imperial College London, said in a statement.  “If true, it would mean that the laws of nature were not always the same as they are today.”  (just a numerical prediction you see, not proof; but if my equations match on both sides of the “=” sign, then I am predicting that it is true, not proving it, just predicting)

The theory of variable speed of light (VSL) was proposed by Dr. Magueijo two decades ago as an alternative to the more popular “inflation theory” – both offer possible solutions to the same fundamental problem.

Most cosmological theories state that the early universe was inconsistent in density – lumpy, if you will – as it expanded after the Big Bang.  The modern universe, by comparison, is thought to be relatively homogeneous.  For that to be possible, light particles would have to spread out to the edge of the universe and “even out” the energy lumps.  But if the speed of light was always constant, it would never have been able to catch up with the expanding universe.  (the universe is thought to be homogeneous; we don’t know for sure and have created theories with lots of imaginative ‘fudge factors’ built into them that prove whatever we are happy with)

Inflation theory, which suggests that the universe expanded rapidly before slowing down, provides one potential answer to the dilemma.  The early universe could have evened out just before expanding, physicists say, if special conditions were present at the time.  (those are the special conditions, aka ‘fudge factors’, that make both sides of the equation match)


In 2003, Lori Valigra reported for The Christian Science Monitor:

But inflation, proposed by MIT physicist Alan Guth in the late 1970s, was never widely adopted by the British theoretical physics community.  And Magueijo claims that as an answer to various “cosmological problems … inflation had won by default.”  This propelled him to think about another solution.  (not widely adopted by the British.  Do you think the Brits ever agree with anyone?)

VSL offers a different non-constant: the speed of light.  According to Magueijo and colleagues, the speed of light could have been much faster in the early moments of cosmological time.  Fast enough, they say, to reach the distant reaches of the universe before slowing to the current rate. (what would have caused it to go faster, what are the possible consequences of something going faster than the speed of light [as we know it now], and what would have caused it to slow down, and when would that have happened?  Gee, no answers yet?)


Now, researchers hope to prove that theory by studying the cosmic microwave background (CMB).  Physicists have long used this radiative “afterglow” to glean new insights about the early universe.  And since cosmic structures leave imprints on the CMB as they fluctuate in density, scientists may someday be able to produce a “spectral index” of the universe. (still learning what the “afterglow” means and trying to understand it, and a long way from it.  After 20 years and $3 billion, the James Webb space telescope is soon to be launched and will make the Hubble its second cousin)

If VSL theory is correct – if the speed of light really was faster after the Big Bang – the spectral index should come in at exactly 0.96478.  That’s not too far off from current estimates, Magueijo says. (so if his theory is correct, the spectral index should be just about the same as it is now estimated, 0.96497.  Not too terribly different, but what do I know?)
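One way to read the paper’s prediction nS = 0.96478(64) is as 0.96478 ± 0.00064. A simple way to judge “not too far off” is to count how many combined standard deviations separate the prediction from a measurement. Here is a minimal sketch; the “measured” value and its error bar below are illustrative placeholders for a Planck-era estimate, not official figures:

```python
# Compare a predicted spectral index with a measured one, in units of the
# combined uncertainty (a rough consistency check, not a full statistical fit).

def sigma_distance(pred, pred_err, meas, meas_err):
    """Number of combined standard deviations between prediction and measurement."""
    combined = (pred_err**2 + meas_err**2) ** 0.5
    return abs(pred - meas) / combined

# Prediction quoted in the paper's abstract: nS = 0.96478(64)
n_pred, n_pred_err = 0.96478, 0.00064

# Illustrative measured value (placeholder, not an official Planck number)
n_meas, n_meas_err = 0.9655, 0.0062

print(round(sigma_distance(n_pred, n_pred_err, n_meas, n_meas_err), 2))  # ~0.12
```

A separation well under one standard deviation means the data neither confirm nor rule out the prediction yet; only a much tighter measurement of the spectral index could do that.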

“The theory, which we first proposed in the late-1990s, has now reached a maturity point – it has produced a testable prediction,” Magueijo said. “If observations in the near future do find this number to be accurate, it could lead to a modification of Einstein’s theory of gravity.”

Physicists have proposed a new experiment to test their theory that Einstein was wrong about the speed of light being a constant, the foundation on which much of modern physics is based.  (So how do you propose to test this?  You write the following paper, which you can read in pdf form)

The critical geometry of a thermal big bang

Niayesh Afshordi, Joao Magueijo

(Submitted on 9 Mar 2016 (v1), last revised 8 Nov 2016 (this version, v2))

We explore the space of scalar-tensor theories containing two non-conformal metrics, and find a discontinuity pointing to a “critical” cosmological solution. Due to the different maximal speeds of propagation for matter and gravity, the cosmological fluctuations start off inside the horizon even without inflation, and will more naturally have a thermal origin (since there is never vacuum domination). The critical model makes an unambiguous, non-tuned prediction for the spectral index of the scalar fluctuations: nS=0.96478(64). Considering also that no gravitational waves are produced, we have unveiled the most predictive model on offer. The model has a simple geometrical interpretation as a probe 3-brane embedded in an EAdSE3 geometry.

(That was the abstract; below are the details, and you can click on the link above to read it yourself)

Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics – Theory (hep-th)
Journal reference: Phys. Rev. D 94, 101301 (2016)
DOI: 10.1103/PhysRevD.94.101301
Cite as: arXiv:1603.03312 [gr-qc]
  (or arXiv:1603.03312v2 [gr-qc] for this version)


(I will be the first to admit I am not an expert in calculus or physics.  However, I do have more than a working understanding of the two subjects.  I had the delightful experience of working with Dr. John Strand, PhD in astrophysics, on a global positioning system for an oil well company.  He worked on the Apollo missions; in fact, he was the one who developed the theory of slingshotting the capsule around the moon to gain enough speed to return to earth, saving fuel to provide more oxygen for the astronauts.  Without his theory, they would have just gone into never-never land [whoops, space].  Check out John’s book, Pathways to the Planets: Memoirs of an Astrophysicist by John R. Strand.  It is only available as an e-book.  It is a remarkable read.

It took me about 8 hours to work through the entire math, about 3 of them to look up the values of the common variables.  Using the ‘common constants’ I ended up with Magueijo’s value.  Using his values for the variables, I got his value also.  This is somewhat disconcerting; I should have gotten a different value using the ‘common constants’.

On top of this, the entire article is his theory; it has no indication of an actual provable test to be performed.  It can’t be done.  He can use the values that the James Webb telescope will provide once it has been launched, calibrated, tested, and allowed to gather information, a number of years from now.

It seems to be a problem with scientists these days: announcing things before they have proof.  To me it puts a dent in scientific research, for why should I investigate something if, by the time I have the facts, somebody else has already taken credit for it, albeit a little too soon and by guessing instead of proving.  LEM)