
Just-so Stories: The Brain Size Increase

1600 words

The increase in brain size in our species over the last 3 million years has been the subject of numerous articles and books. Over that time period, brain size increased from that of our australopithecine ancestor Lucy all the way to that of modern humans. Many stories have been proposed to explain how and why it happened. The explanation is the same ol' one: those with bigger heads, and therefore bigger brains, had more children and passed on their "brain genes" to the next generation until all that was left were bigger-brained individuals of that species. But there is a problem here, just like with all just-so stories: how do we know that selection 'acted' on brain size and thus "selected-for" the 'smarter' individuals?

Christopher Badcock, an evolutionary psychologist, wrote an introduction to EP published in 2001 in which he takes a very balanced view of the field—noting its pitfalls and where, in his opinion, EP is useful. (Most may know my views on this already, see here.) In any case, Badcock cites R.D. Martin (1996: 155) who writes:

… when the effects of confounding variables such as body size and socio-economic status are excluded, no correlation is found between IQ and brain size among modern humans.
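A minimal sketch (synthetic data and assumed effect sizes of my own, not Martin's analysis) of the kind of adjustment he is describing: a raw brain-size/IQ correlation can disappear once a shared confound such as body size is regressed out.

```python
# Toy simulation (synthetic data, assumed effect sizes): a raw brain-size/IQ
# correlation can vanish once a shared confound (body size) is controlled for.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
body = rng.normal(0, 1, n)                # standardized body size (the confound)
brain = 0.7 * body + rng.normal(0, 1, n)  # brain size tracks body size
iq = 0.3 * body + rng.normal(0, 1, n)     # IQ tracks body size, not brain size

def residualize(y, x):
    """Residuals of y after regressing out x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw = np.corrcoef(brain, iq)[0, 1]
partial = np.corrcoef(residualize(brain, body), residualize(iq, body))[0, 1]
print(f"raw r = {raw:.2f}, partial r (body size controlled) = {partial:.2f}")
```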

Badcock (2001: 48) also quotes George Williams—author of Adaptation and Natural Selection (1966), a precursor to Dawkins' The Selfish Gene—who writes:

Despite the arguments that have been advanced, I cannot readily accept the idea that advanced mental capabilities have ever been directly favored by selection. There is no reason for believing that a genius has ever been likely to leave more children than a man of somewhat below average intelligence. It has been suggested that a tribe that produces an occasional genius for its leadership is more likely to prevail in competition with tribes that lack this intellectual resource. This may well be true in the sense that a group with highly intelligent leaders is likely to gain political domination over less gifted groups, but political domination need not result in genetic domination, as indicated by the failure of many a ruling class to maintain its members.

In Adaptation and Natural Selection, Williams was much more cautious than adaptationists today, stating that adaptationism should be used only in very special cases. Too bad that adaptationists today did not get the memo. But what gives? Doesn't it make sense that the "more intelligent" human 2 mya would be more successful when it comes to fitness than the "less intelligent" (whatever these words mean in this context) individual? Would a prehistoric Bill Gates have had the most children due to his "high IQ", as PumpkinPerson has claimed in the past? I doubt it.

In any case, the increase in brain size—and therefore the increase in intellectual ability in humans—has been the last stand for evolutionary progressionists. "Look at the increase in brain size over the past 3 mya," the progressionist says. "Doesn't it look like there is a trend toward bigger, higher-quality brains in humans as our skills increased?" While it may look like that on its face, the real story is much more complicated.

Deacon (1990a) notes many fallacies that those who invoke the brain size increase across evolutionary history make, including: the evolutionary progression fallacy; the bigger-is-smarter fallacy; and the numerology fallacy. The evolutionary progression fallacy is simple enough. Deacon (1990a: 194) writes:

In theories of brain evolution, the concept of evolutionary progress finds implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar. Most of these accounts in some way or other are tied to problems of interpreting the correlates of brain size. The task that follows is to dispose of fallacious progressivist notions hidden in these analyses without ignoring the questions otherwise begged by the many enigmatic correlations of brain size in vertebrate evolution.

Of course, when it comes to the bigger-is-smarter fallacy, it's quite obviously not true that bigger is always better, as elephants and whales have larger brains than humans (also see Skoyles, 1999). But what they do not have more of than humans is cortical neurons (see Herculano-Houzel, 2009). Deacon (1990a: 201) describes the numerology fallacy:

Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.
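A toy example of my own (not Deacon's) of how such an artifact can arise: two ratios that share a denominator, such as structure-to-body-size ratios, correlate even when the underlying quantities are statistically independent.

```python
# Toy illustration (my example, not Deacon's): a numerology-style artifact.
# Two ratios built from independent quantities correlate simply because they
# share a denominator (body size), not because of any real relationship.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.lognormal(0, 0.3, n)      # size of one brain structure
y = rng.lognormal(0, 0.3, n)      # size of another, independent of x
body = rng.lognormal(0, 0.5, n)   # body size, independent of both

print("corr(x, y)           =", round(np.corrcoef(x, y)[0, 1], 2))                # ~0
print("corr(x/body, y/body) =", round(np.corrcoef(x / body, y / body)[0, 1], 2))  # clearly positive
```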

Deacon (1990a: 232) concludes that:

The idea, that there have been progressive trends of brain evolution, that include changes in the relative proportions of different structures (i.e., enlarging more “advanced” areas with respect to more primitive areas) and increased differentiation, interconnection, and overall complexity of neural circuits, is largely an artifact of misunderstanding the complex internal correlates of brain size. … Numerous statistical problems, arising from the difficulty of analyzing a system with so many interdependent scaling relationships, have served to reinforce these misconceptions, and have fostered the tacit assumption that intelligence, brain complexity, and brain size bear a simple relationship to one another.

Deacon (1990b: 255) notes that brains were not directly selected for; rather, bigger bodies were (and bigger bodies mean bigger brains). This does not run into the selection-for problem for traits, since on this view it is the organism, not one of its traits, that is being selected:

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed.
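A minimal sketch of Deacon's parallelism point, with an assumed allometric exponent and arbitrary constants: if brain size simply scales with body size, selecting only on body size still produces an apparent "trend" of increasing brain size as a side effect.

```python
# Toy model (arbitrary constants, assumed allometric exponent): selection acts
# only on body size, yet mean "brain size" rises across generations because
# brain size is allometrically tied to body size (brain ~ k * body^0.75).
import numpy as np

rng = np.random.default_rng(2)
body = rng.lognormal(mean=np.log(30.0), sigma=0.2, size=1000)  # body mass (kg)

for gen in range(5):
    brain = 0.01 * body ** 0.75 + rng.normal(0, 0.001, size=body.size)  # arbitrary units
    print(f"gen {gen}: mean body {body.mean():6.1f} kg | mean brain {brain.mean():.4f} (a.u.)")
    # Truncation selection on BODY SIZE only: the larger half reproduces.
    parents = np.sort(body)[body.size // 2:]
    body = rng.choice(parents, size=1000) * rng.lognormal(0.0, 0.05, size=1000)
```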

Deacon (1990b: 697-698) notes that the large brain-to-body size ratio in humans compared to other primates is an illusion, "a surface manifestation of a complex allometric reorganization within the brain", and that the brain itself is unlikely to be the object of selection. The correlated reorganization of the human brain, to Deacon, is what makes humans unique, not our "oversized" brains for our bodies. Deacon (1990c) likewise states that "To a great extent the apparent "progress" of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account." (See also Deacon, 1997: chapter 5.)

So is there really progress in brain evolution, which would, in effect, lend credence to the idea that evolution is progressive? No. The apparent size increases over our evolutionary history are an artifact that largely disappears once brain size and functional specialization are taken into account (functional specialization is the claim that different areas of the brain are specialized to carry out different functions; see Mahon and Cantlon, 2014). Our brains only seem to have "progressed"; when we get down to the functional details, the trend turns out to be an artifact.

Skoyles and Sagan (2002: 240) note that erectus, for example, could have survived with a much smaller brain and that the brain of erectus did not arise out of a need for survival:

So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy’s 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.

In any case—irrespective of the problems that Deacon raises for arguments from increasing brain size—how would we be able to use the theory of natural selection to show what was selected-for: brain size, or another correlated trait? The progressionist may say that it doesn't matter which was selected-for; brain size is still increasing even if it is merely the free-rider on some other selected trait.

But, too bad for the progressionist: if a correlated trait is being selected-for and not brain size directly, then the progressionist cannot logically state that brain size—and, along with it, intelligence (as the implication always is)—was directly selected-for. Deacon throws a wrench into such theories of evolutionary progress in regard to human brain size. Moreover, looking at erectus, it's not clear that he really "needed" such a big brain for survival—it seems like he could have gotten away with a much smaller brain. And there is no reason, as George Williams notes, to argue that "high intelligence" was selected-for in our evolutionary history.

And so, Gould's Full House argument still stands—there is no progress in evolution; bacteria occupy life's mode; and humans are insignificant next to the number of bacteria on the planet, "big brains" or not.


Rampant Adaptationism

1500 words

Adaptationism is the dominant school of thought on evolutionary change through "natural selection" (NS). That is the only way for adaptations to appear, says the adaptationist: traits that were conducive to reproductive success in past environments were selected-for their contribution to fitness and therefore became fixed in the organism in question. That's adaptationism in a nutshell. It's also vacuous and tells us nothing interesting. In any case, the school of thought called adaptationism has been the subject of much criticism, most importantly Gould and Lewontin (1979), Fodor (2008), and Fodor and Piattelli-Palmarini (2010). So, I would say that adaptationism becomes "rampant" when clearly cultural changes are treated as having an evolutionary history and as persisting today because they are adaptations.

Take Bret Weinstein’s recent conversation with Richard Dawkins:

Weinstein: “Understood through the perspective of German genes, vile as these behaviors were, they were completely comprehensible at the level of fitness. It was abhorrent and unacceptable—but understandable—that Germany should have viewed its Jewish population as a source of resources if you viewed Jews as non-people. And the belief structures that cause people to step onto the battlefields and fight were clearly comprehensible as adaptations of the lineages in question.”

Dawkins: “I think nationalism may be an even greater evil than religion. And I’m not sure that it’s actually helpful to speak of it in Darwinian terms.”

I find it funny that Weinstein is more of a Dawkins-ist than Dawkins himself is (in regard to his “selfish gene theory”, see Noble, 2011). In any case, what a ridiculous claim. “Guys, the Nazis were bad because of their genes and their genes made them view Jews as non-people and resources. Their behaviors were completely understandable at the level of fitness. But, Nazis bad!”

I like how Dawkins quickly shot the bullshit down. This is just-so storytelling on steroids. I wonder what "belief structures that cause people to step onto battlefields" are "adaptations of the lineages in question"? Are German "belief structure" adaptations different from those of any other group? Can one prove that there are "belief structures" that are "adaptations of the lineages in question"? Or is Weinstein just telling just-so stories—stories with little evidence that "fit" and "make sense" with the data we have (despicable Nazi behavior toward Jews after WWI and before and during WWII)?

There is a larger problem with adaptationism, though: adaptationists confuse adaptiveness with adaptation (a trait can be adaptive without being an adaptation), they overlook non-adaptationist explanations, and adaptationist hypotheses are hard to falsify since a new story can be erected to explain the feature in question if one story gets disproved. That's the dodginess of adaptationism.

An adaptationist may look at an organism, look at its traits, and then construct a story as to why it has the traits it does. They will attempt to reconstruct its evolutionary history by considering the environment it currently occupies and what its traits are useful for now. But there is a danger here. We can create many stories for just one so-called adaptation. How do we distinguish between the stories that explain the fixation of the trait and those that do not? We can't: there is no way for us to know which of the causal stories explains the fixation of the trait.

Gould and Lewontin (1979) fault:

the adaptationist programme for its failure to distinguish current utility from reasons for origin (male tyrannosaurs may have used their diminutive front legs to titillate female partners, but this will not explain why they got so small); for its unwillingness to consider alternatives to adaptive stories; for its reliance upon plausibility alone as a criterion for accepting speculative tales; and for its failure to consider adequately such competing themes as random fixation of alleles, production of nonadaptive structures by developmental correlation with selected features (allometry, pleiotropy, material compensation, mechanically forced correlation), the separability of adaptation and selection, multiple adaptive peaks, and current utility as an epiphenomenon of nonadaptive structures.

[…]

One must not confuse the fact that a structure is used in some way (consider again the spandrels, ceiling spaces, and Aztec bodies) with the primary evolutionary reason for its existence and conformation.

Of course, though, adaptationists (e.g., evolutionary psychologists) do confuse the current use of a structure with the reason for its existence. This is fallacious reasoning: that a trait is useful and adaptive in a current environment is in no way evidence that it is an adaptation, nor is it evidence that that is why the trait evolved.

But there is a problem with looking to the ecology of the organism in question and attempting to construct historical narratives about the evolution of the so-called adaptation. As Fodor and Piattelli-Palmarini (2010) note, "if evolutionary problems are individuated post hoc, it's hardly surprising that phenotypes are so good at solving them." And of course, if an organism fails to secure a niche, then that niche was not for that organism.

That organisms are so "fit" to their environment, like a puzzle piece to its surrounding pieces, is supposed to prove that "traits are selected-for their contribution to fitness in a given ecology", and this is what the theory of natural selection attempts to explain. Organisms fit their ecologies because it's their ecologies that "design" their traits. So it is no wonder that organisms and their environments have such a tight relationship.

Take it from Fodor and Piattelli-Palmarini (2010: 137):

You don’t, after all, need an adaptationist account of evolution in order to explain the fact that phenotypes are so often appropriate to ecologies, since, first impressions to the contrary notwithstanding, there is no such fact. It is just a tautology that (if it isn’t dead) a creature’s phenotype is appropriate for its survival in the ecology that it inhabits.

So since the terms "ecology" and "phenotype" are interdefined, is it any wonder that an organism's phenotype has such a "great fit" with its ecology? I don't think it is. Fodor and Piattelli-Palmarini (2010) note how:

it is interesting and false that creatures are well adapted to their environments; on the other hand it’s true but not interesting that creatures are well adapted to their ecologies. What, then, is the interesting truth about the fitness of phenotypes that we require adaptationism in order to explain? We’ve tried and tried, but we haven’t been able to think of one.

So the argument here could be:

P1) Niches are individuated post hoc by reference to the phenotypes that live in said niche.
P2) If the organisms weren’t there, the niche would not be there either.
C) Therefore there is no fitness of phenotypes to lifestyles that explain said adaptation.

Fodor and Piattelli-Palmarini put the point about the organism's "fit" to its ecology bluntly: "although it's very often cited in defence of Darwinism, the 'exquisite fit' of phenotypes to their niches is either true but tautological or irrelevant to questions about how phenotypes evolve. In either case, it provides no evidence for adaptationism."

The million-dollar question is this, though: what would be evidence that a trait is an adaptation? Knowing what we now know about the so-called fit to the ecology, how can we say that a trait is an adaptation for problem X when niches are individuated post hoc? That right there is the folly of adaptationism, along with the fact that it is unfalsifiable and leads to just-so storytelling (Smith, 2016).

Such stories are "plausible", but that is only because they are selected to be so. When such adaptationism becomes entrenched in thought, many traits are looked at as adaptations and stories are then constructed as to how and why each trait became fixed in the organism. But, just like EP, which uses the theory of natural selection as its basis, adaptationism fails. Never mind the problem that fitting species to ecologies renders evolutionary problems post hoc; never mind the problem that there are no criteria for identifying adaptations; do mind the fact that there is no possible way for natural selection to do what it is claimed to do: distinguish between coextensive traits.
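A minimal sketch of that last point, with toy numbers of my own: when two traits are perfectly coextensive in a population, selection produces exactly the same survivors whichever trait we say it "acted on", so differential reproduction alone cannot tell us which trait was selected-for.

```python
# Toy model (assumed selection rule): two perfectly coextensive traits, t1 and
# t2, in the same individuals. Selecting the top half on t1 or on t2 yields the
# identical set of survivors, so the outcome cannot tell us which trait was
# "selected-for" and which was the free-rider.
import numpy as np

rng = np.random.default_rng(3)
t1 = rng.normal(0, 1, 10_000)
t2 = 2.0 * t1 + 5.0                  # coextensive with t1 (perfectly correlated)

def survivors(trait):
    """Indices of individuals whose trait value is in the top half."""
    return np.sort(np.argsort(trait)[trait.size // 2:])

same = np.array_equal(survivors(t1), survivors(t2))
print("identical survivors whichever trait 'selection' acts on:", same)  # True
```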

In sum, adaptationism is a failed paradigm and we need to dispense with it. The logical problems with it are more than enough to disregard it. Sure, a phenotype such as the claws of a mole makes sense in the ecology the mole lives in. But we only claim that the claws of a mole are adaptations after the fact. One may say "It's obvious that the claws of a mole are adaptations, look at how it lives!" But this runs afoul of the point Gould and Lewontin (1979) made: do not confuse the fact that a structure is used in some way with the evolutionary reason for its existence, which, unfortunately, many people do (most glaringly, evolutionary psychologists). Weinstein's ridiculous claims about Nazi actions during WWII are a great example of how rampant adaptationism has become: we can explain any and all traits as adaptations, we just need to be creative with the stories we tell. But just because we can create a story that "makes sense" of the observation does not mean that the story is a likely explanation for the trait's existence.

The Modern Synthesis vs the Extended Evolutionary Synthesis

2050 words

The Modern Synthesis (MS) has dominated evolutionary thought since its inception in the mid-twentieth century. The MS is the integration of Darwinian natural selection and Mendelian genetics. Key assumptions include "(i) evolutionarily significant phenotypic variation arises from genetic mutations that occur at a low rate independently of the strength and direction of natural selection; (ii) most favourable mutations have small phenotypic effects, which results in gradual phenotypic change; (iii) inheritance is genetic; (iv) natural selection is the sole explanation for adaptation; and (v) macro-evolution is the result of accumulation of differences that arise through micro-evolutionary processes" (Laland et al, 2015).

Laland et al (2015) even have a helpful table on core assumptions of both the MS and Extended Evolutionary Synthesis (EES). The MS assumptions are on the left while the EES assumptions are on the right.

[Table: core assumptions of the MS (left) vs. the EES (right), from Laland et al, 2015]

Darwinian cheerleaders, such as Jerry Coyne and Richard Dawkins, would claim that neo-Darwinism can—and already does—account for the assumptions of the EES. However, it is clear that that claim is false. At its core, the MS is a gene-centered perspective whereas the EES is an organism-centered perspective.

To the followers of the MS, evolution occurs through random mutations and changes in allele frequencies; variants that increase fitness get selected for, and so the traits the genes 'cause' carry on to the next generation due to their contribution to fitness in the organism. Drift, mutation, and gene flow also account for changes in gene frequencies, but to the Darwinian, selection is the strongest of these modes of evolution. The debate between the MS and the EES comes down to gene selectionism vs. developmental systems theory.

On the other hand, the EES is an organism-centered perspective. Adherents to the EES state that the organism is inseparable from its environment. Jarvilehto (1998) describes this well:

The theory of the organism-environment system (Järvilehto, 1994, 1995) starts with the proposition that in any functional sense organism and environment are inseparable and form only one unitary system. The organism cannot exist without the environment and the environment has descriptive properties only if it is connected to the organism.

At its core, the EES makes evolution about the organism—its developmental system—and treats genes not as active causes of traits and behaviors but as passive causes, used by and for the system as needed (Noble, 2011; Richardson, 2017).

One can see that the core assumptions of the MS are very much like what Dawkins describes in his book The Selfish Gene (Dawkins, 1976). In the book, Dawkins claimed that we are what amounts to “gene machines”—that is, just vehicles for the riders, the genes. So, for example, since we are just gene machines, and if genes are literally selfish “things”, then all of our actions and behaviors can be reduced to the fact that our genes “want” to survive. But the “selfish gene” theory “is not even capable of direct empirical falsification” (Noble, 2011) because Richard Dawkins emphatically stated in The Extended Phenotype (Dawkins, 1982: 1) that “I doubt that there is any experiment that could prove my claim” (quoted in Noble, 2011).

Noble (2011) goes on to discuss Dawkins' view of genes:

Now they swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. (1976, 20)

Noble then switches the analogy: he likens genes not to "selfish" agents but to "prisoners", stuck in the body with no way of escape. Since there is no experiment that could distinguish between the two views (which Dawkins admitted), neither metaphor is empirically privileged. Noble concludes that, instead of being "selfish", genes are viewed by the physiological sciences as "cooperative", since they need to "cooperate" with the environment, other genes, gene networks, etc., which together comprise the whole organism.

In his 2018 book Agents and Goals in Evolution, Samir Okasha distinguishes between type I and type II agential thinking. "In type 1 [agential thinking], the agent with the goal is an evolved entity, typically an individual organism; in type 2, the agent is 'mother nature', a personification of natural selection" (Okasha, 2018: 23). An example of type I agential thinking is Dawkins' selfish genes, while type II is the personification one imputes onto natural selection—a type of thinking that Okasha states "Darwin was himself first to employ" (Okasha, 2018: 36).

Okasha states that each gene's ultimate goal is to outcompete other genes—that is, to increase its frequency in the population. Genes can also have an intermediate goal, which is to maximize fitness. Okasha gives three criteria for what makes something "an agent": (1) goal-directedness; (2) behavioral flexibility; and (3) adaptedness. So the "selfish" element "constitutes the strongest argument for agential thinking" about genes (Okasha, 2018: 73). However, as Denis Noble has tirelessly pointed out, genes (DNA sequences) are inert molecules (and are one part of the developing system) and so do not show behavioral flexibility or goal-directedness. Genes can (along with other parts of the system working in concert with them) exert adaptive effects on the phenotype, though when genes (and traits) are coextensive, selection cannot distinguish between the fitness-enhancing trait and the free-riding trait, so it only makes logical sense to claim that organisms are selected, not any individual traits (Fodor and Piattelli-Palmarini, 2010a, 2010b).

It is because of this that the neo-Darwinian gene-centric paradigm has failed, and it is the reason why we need a new evolutionary synthesis. Some only wish to tweak the MS a bit in order to accommodate what it does not currently incorporate, but others want to overhaul the entire thing and extend it.

Here is the main reason why the MS fails: there is absolutely no reason to privilege any level of the system above any other! Causation is multi-level and constantly interacting. There is no a priori justification for privileging any developmental variable over any other (Noble, 2012, 2017). Both downward and upward causation exist in biological systems (which means that molecules depend on organismal context). The organism is also able to control stochasticity—which is "used to … generate novelty" (Noble and Noble, 2018). Lastly, there is the creation of novelty at new levels of selection, as when the organism is an active participant in the construction of its environment.

Now, what does the EES bring that is different from the MS? A whole bunch. Most importantly, it makes a slew of novel predictions. Laland et al (2016) write:

For example, the EES predicts that stress-induced phenotypic variation can initiate adaptive divergence in morphology, physiology and behaviour because of the ability of developmental mechanisms to accommodate new environments (consistent with predictions 1–3 and 7 in table 3). This is supported by research on colonizing populations of house finches [68], water fleas [132] and sticklebacks [55,133] and, from a more macro-evolutionary perspective, by studies of the vertebrate limb [57]. The predictions in table 3 are a small subset of those that characterize the EES, but suffice to illustrate its novelty, can be tested empirically, and should encourage deriving and testing further predictions.

[Table 3]


There are other ways to verify EES predictions, and they are simple and can be done in the lab. In his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, philosopher of biology Jan Baedke notes that studies of epigenetic processes induced in the lab and those observed in nature are similar in that they share the same methodological framework. So we can use lab-induced epigenetic processes to ask evolutionary questions and get evolutionary answers in an epigenetic framework. There are two problems, though: one, we don't know whether experimental and natural epigenetic inducements will match up; and two, we don't know whether epigenetic explanations that focus on proximate rather than ultimate causes can address evolutionary explananda. Baedke (2018: 89) writes:

The first has been addressed by showing that studies of epigenetic processes that are experimentally induced in the lab (in molecular epigenetics) and those observed in natural populations in the field (in ecological or evolutionary epigenetics) are not that different after all. They share a similar methodological framework, one that allows them to pose heuristically fruitful research questions and to build reciprocal transparent models. The second issue becomes far less fundamental if one understands the predominant reading of Mayr’s classical proximate-ultimate distinction as offering a simplifying picture of what (and how) developmental explanations actually explain. Once the nature of developmental dependencies has been revealed, the appropriateness of developmentally oriented approaches, such as epigenetics, in evolutionary biology is secured.

Further arguments for epigenetics from an evolutionary approach can be found in Richardson's (2017) Genes, Brains, and Human Potential (chapters 4 and 5) and Jablonka and Lamb's (2005) Evolution in Four Dimensions. More than genes alone are passed on and inherited, and this throws a wrench into the MS.

Some may fault DST for not offering anything comparable to Darwinism, as Dupre (2003: 37) notes:

Critics of DST complain that it fails to offer any positive programme that has achievements comparable to more orthodox neo-Darwinism, and so far this complaint is probably justified.

But this is irrelevant. For if we look at DST as just a part of the whole EES programme, then it is the EES that needs to—and does—"offer a positive programme that has achievements comparable to more orthodox neo-Darwinism" (Dupre, 2003: 37). And that is exactly what the EES does: it makes novel predictions; it explains what needs to be explained better than the MS does; and the MS has been shown to be incoherent (that is, there cannot be selection on only one level; there can only be selection on the organism). That the main tool of the MS (natural selection) has been shown by Fodor to be vacuous and non-mechanistic is yet another strike against it.

Since DST is a main part of the EES, DST is "a wholeheartedly epigenetic approach to development, inheritance and evolution" (Griffiths, 2015), and the EES incorporates epigenetic theories, the EES will live or die on whether its evolutionary epigenetic theories are confirmed. And with the recent slew of books and articles attesting to the fact that there is a huge epigenetic component to evolution (e.g., Baedke, 2018; Bonduriansky and Day, 2018; Meloni, 2019), it is most definitely worth seeing what evolutionary epigenetics studies can find, since epigenetic changes induced in the lab and those observed in natural populations are not that different. This can then confirm or deconfirm major hypotheses of the EES—of which there are many. It is time for Lamarck to make his return.

It is clear that the MS is lacking, as many authors have pointed out. To understand evolutionary history and why organisms have the traits they do, we need much more than the natural selection-dominated neo-Darwinian Modern Synthesis. We need a new synthesis (which has been formulated for the past 15-20 years) and only through this new synthesis can we understand the hows and whys. The MS was good when we didn’t know any better, but the reductionism it assumes is untenable; there cannot be any direct selection on any level (i.e., the gene) so it is a nonsensical programme. Genes are not directly selected, nor are traits that enhance fitness. Whole organisms and their developmental systems are selected and propagate into future generations.

The EES (and DST along with it) holds to the causal parity thesis—"that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables." This causal parity between all the tools of development is telling: what is selected is not just one level of the system, as genetic reductionists (neo-Darwinists) would like to believe; selection occurs on the whole organism and what it interacts with (the environment); environments are inherited too. Once we purge the falsities forced upon us by the MS in regard to organisms and their relationship with the environment, and the MS's assumptions about evolution as a whole, we can then truly understand how and why organisms evolve the phenotypes they do; we cannot do so with genetic reductionist thinking and its sloppy logic. So who wins? The MS does not, since it gets causation in biology wrong. This leaves us with the EES as the superior theory, predictor, and explainer.

Book Review: “Lamarck’s Revenge”

3500 words

I recently bought Lamarck's Revenge by paleobiologist Peter Ward (2018) because I went on a trip and needed something to read on the flight. I just finished the book the other day and I thought that I would give a review and also discuss Coyne's review of the book, since I know he is so uptight about epigenetic theories, like those of Denis Noble and Jablonka and Lamb. In Lamarck's Revenge, Ward (2018) purports to show that Lamarck was right all along and that the advent of the burgeoning field of epigenetics is "Lamarck's revenge" on those who—in the current day—make fun of his theories in intro biology classes. (When I took Bio 101, the professor made it a point to bring up Lamarck and giraffe necks as a "look at this wrong theory" example, nevermind the fact that Darwin was wrong too.) I will go chapter-by-chapter, give a brief synopsis of each, and then discuss Coyne's review.

In the introduction, Ward discusses some of the problems with Darwinian thought and current biological understanding. The current neo-Darwinian Modern Synthesis states that what occurs in the lifetime of the organism cannot be passed down to further generations—that any 'marks' on the genome are erased. However, recent research has shown that this is not the case. Numerous studies on plants and "simpler" organisms refute the notion, though for more "complex" organisms it has yet to be proved. Still, that this discussion is even occurring is proof that we are heading in the right direction in regard to a new synthesis. In fact, Jablonka and Lamb (2005) showed in their book Evolution in Four Dimensions that epigenetic mechanisms can and do produce rapid speciation—too quick for "normal" Darwinian evolution.

Ward (2018: 3-4) writes:

There are good times and bad times on earth, and it is proposed here that dichotomy has fueled a coupling of times when evolution has been mainly through Darwinian evolution and others when Lamarckian evolution has been dominant. Darwinian in good times, Lamarckian in bad, when bad can be defined as those times when our environments turn topsy-turvy, and do so quickly. When an asteroid hits the planet. When giant volcanic episodes create stagnant oceans. When a parent becomes a sexual predator. When our industrial output warms the world. When there are six billion humans and counting.

These examples are good—save the one about a parent becoming a sexual predator (though it works if we accept the thesis that what we do, and what happens to us, can leave marks on our DNA that do not change the sequence but are passed on)—and they all point to one thing: times when the environment becomes ultra-chaotic. When such changes occur in the environment, an organism needs a physiology that can change on demand in order to survive (see Richardson, 2017).

Ward (2018: 8) then describes Lamarck’s three-step process:

First, an animal experienced a radical change of the environment around it. Second, the initial response to the environmental change was some new kind of behavior by that of the animal (or whole species). Third, the behavioral change was followed by morphological change in subsequent generations.

Ward then discusses others before Darwin—Darwin’s grandfather Erasmus, for instance—who had theories of evolution before Darwin. In any case, we went from a world in which a God created all to a world where everything we see was created by natural processes.

Then in Chapter 2, Ward discusses Lamarck and Darwin and each of their theories in turn. (Note that Darwin had Lamarckian views too.) Ward covers the intellectual duel between Lamarck and Georges Cuvier, the father of comparative anatomy, who studied mass extinctions. At Lamarck's funeral, Cuvier spoke ill of Lamarck and buried his theories. (See Cuvier's (1836) Elegy of Lamarck.) These types of arguments between academics have been going on for hundreds of years—and they will not stop any time soon.

In Chapter 3 Ward traces Darwin's ideas all the way to the Modern Synthesis, discussing how Darwin formulated his theory of natural selection, the purported "mechanism of evolution", and how Darwin at first rejected Lamarck's ideas but then integrated them into later editions of On the Origin. We can think of this scenario: imagine any environment and the organisms in it. The environment rapidly shifts to the point where it is unrecognizable. The organisms in that environment then need to either change their behavior (and reproduce) or die. Now, if there were no way for organisms to change, say, their physiology (since physiology is dependent on what is occurring in the outside environment), then the species would die and there would be no evolution. However, the advent of evolved physiologies changed that. Morphological and physiological plasticity can and does help organisms survive in new environments—environments that are "new" to the parental organism—and this is a form of Lamarckism ("heritable epigenetics", as Ward calls it).

Chapter 4 discusses epigenetics and a newer synthesis. At the beginning of the chapter, Ward discusses a study he was a part of (Vandepas et al, 2016). (Read Ward's Nautilus article here.)

They studied two (so-called) different species of nautilus—one, Nautilus pompilius, widespread across the Pacific and Indian Oceans, and two, Nautilus stenomphalus, found only at the Great Barrier Reef. N. pompilius has a hole in the middle of its shell, whereas stenomphalus has a plug in the middle. The two (so-called) species also differ anatomically—pompilius has a hood covered with bumps of flesh whereas stenomphalus' hood is filled with projections of moss-like twig structures. So over a thirty-day period, they captured thirty nautiluses, snipped a piece of their tentacles, and sequenced the DNA found in them. They found that the DNA of these two morphologically different animals was the same. Thus, although the two are said to be different species based on their morphology, genetically they are the same species, which leads Ward (2018: 52) to claim "that perhaps there are fewer, not more, species on Earth than science has defined." Ward (2018: 53) cites a recent example—the fact that the Columbian and North American woolly mammoths "were genetically the same but the two had phenotypes determined by environment" (see Enk et al, 2011).
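A toy sketch (hypothetical, shortened sequences of my own, not the study's data) of the kind of comparison behind "the DNA of these two morphologically different animals was the same": percent identity between aligned mitochondrial gene fragments such as COI.

```python
# Toy sketch (hypothetical sequences, illustration only): percent identity
# between two aligned mitochondrial gene fragments, the sort of comparison
# behind the claim that the two nautilus morphs are genetically "the same".
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over already-aligned, equal-length sequences (no gaps)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical short fragments standing in for real COI reads.
pompilius_coi    = "ATGGCATGATTAGGATTAGTAGGAACAGCATTAAGAATC"
stenomphalus_coi = "ATGGCATGATTAGGATTAGTAGGAACAGCATTAAGAATC"

print(f"identity: {percent_identity(pompilius_coi, stenomphalus_coi):.1f}%")
# 100% here; note that real studies also need nuclear markers (see the Combosch
# et al quote below), since mtDNA alone can mislead.
```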

Now take Ward’s (2018: 58) definition of “heritable epigenetics”:

In heritable epigenetics, we pass on the same genome, but one marked (mark is the formal term for the place that a methyl molecule attaches to one nucleotide, a rung in the ladder of DNA) in such a way that the new organism soon has its own DNA swarmed by these new (and usually unwelcome) additions riding on the chromosomes. The genotype is not changed, but the genes carrying the new, sucker-like methyl molecules change the workings of the organism to something new, such as the production (or lack thereof) of chemicals necessary for our good health, or for how some part of the body is produced.
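A toy representation (my own illustration, not Ward's) of the idea: the genotype is copied unchanged, while a set of methylation "marks" riding on it is inherited separately, possibly with some loss.

```python
# Toy model (my illustration, not Ward's): the DNA sequence is copied verbatim,
# while methylation "marks" (positions carrying a methyl group) are inherited
# separately -- so heritable state can change without any change in genotype.
import random

random.seed(0)

parent_genome = "ACGTCGATCGCGTACGCGAT"
parent_marks = {4, 9, 11}            # methylated positions (e.g., CpG sites)

def reproduce(genome, marks, retention=0.9):
    """Copy the genome unchanged; each mark is retained with some probability."""
    child_marks = {pos for pos in marks if random.random() < retention}
    return genome, child_marks

child_genome, child_marks = reproduce(parent_genome, parent_marks)
print("genotype unchanged:", child_genome == parent_genome)   # True
print("inherited marks:   ", sorted(child_marks))
```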

Chapter 5 discusses different environments in the context of evolutionary history. Environmental catastrophes that led to the decimation of most life on the planet are the subject—something Gould wrote about throughout his career (his concept of contingency in the evolutionary process). Now, going back to Lamarck's dictum (first an environmental change, second a change in behavior, and third a change in phenotype), we can see that these kinds of processes were indeed imperative in the evolution of life on earth. Take the asteroid impact (the K-Pg, Cretaceous-Paleogene, extinction) that killed off the dinosaurs and threw tons of soot into the air, blocking out the sun and making it effectively night (Schulte et al, 2010). All organisms that survived needed to eat. If an organism only ate in the daytime, it would then need to eat at night or die. That right there is a radical environmental change (step 1) followed by a change in behavior (step 2), which would eventually lead to step 3.

In Chapter 6, Ward discusses epigenetics and the origins of life. The main subject of the chapter is lateral gene transfer—the transmission of DNA between genomes. Hundreds or thousands of new genes can be inserted into an organism and effectively change its morphology; this is a Lamarckian mechanism. Ward posits that there were many kinds of "genetic codes" and "metabolisms" throughout earth's history, even organisms that were "alive" but not capable of reproducing and so were "one-offs." Ward even describes Margulis' (1967) theory of endosymbiosis as "a Lamarckian event", a description Margulis herself accepted. Thus, evolution through lateral gene transfer is possible, and it is another Lamarckian mechanism.

Chapter 7 discusses epigenetics and the Cambrian explosion. Ward cites a Creationist who claims that there has not been enough time in the roughly 500 million years since the explosion to account for the diversity of body plans we see. Stephen Jay Gould wrote a whole book on this—Wonderful Life. It is true that Darwinian theory cannot explain the diversity of body plans, nor even the diversity of species and their traits (Fodor and Piattelli-Palmarini, 2010), but this does not mean that Creationism is true. If we are discussing the diversification of organismal life after mass extinctions, then Darwinian evolution cannot possibly have played a role in the survival of species—organisms with adaptive physiologies would have had a better chance of surviving in these new, chaotic environments.

It is posited here that four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion as we currently know it: the first, now familiar, methylation; second, small RNA silencing; third, changes in the histones, the scaffolding that dictates the overall shape of a DNA molecule; and, finally, lateral gene transfer, which has recently been shown to work in animals, not just microbes. (Ward, 2018: 113)

Ginsburg and Jablonka (2010) state that “[associative] learning-based diversification was accompanied by neurohormonal stress, which led to an ongoing destabilization and re-patterning of the epigenome, which, in turn, enabled further morphological, physiological, and behavioral diversification.” So associative learning, according to Ginsburg and Jablonka, was the driver of the Cambrian explosion. Ward (2018: 115) writes:

[The paper by Ginsburg and Jablonka] says that changes of behavior by both animal predators and animal prey began as an “arms race” in not just morphology but behavior. Learning how to hunt or flee; detecting food and mats and habitats at a distance from chemical senses of smell or vision, or from deciphering vibrations coming through water. Yet none of that would matter if the new behaviors and abilities were not passed on. As more animal body plans and the species they were composed of appeared, ecological communities changed radically and quickly. The epigenetic systems in animals were, according to the authors, “destabilized,” and in reordering them it allowed new kinds of morphology, physiology, and again behavior, and amid this was the ever-greater use of powerful hormone systems. Seeing an approaching predator was not enough. The recognition of imminent danger would only save an animal’s life if its whole body was alerted and put on a “war footing” by the flooding of the creature with stress hormones. Powerful enactors of action. Over time, these systems were made heritable and, according to the authors, the novel evolution of fight or flight chemicals would have greatly enhanced survivability and success of early animals “enabled animals to exploit new niches, promoted new types of relations and arms races, and led to adaptive responses that became fixed through genetics.”

That, and vision. Brains, behavior, sense organs, and hormones are tied to the nervous system and the digestive system. No single adaptation led to animal success. It was the integration of these disparate systems into a whole that fostered survivability, and fostered the rapid evolution of new kinds of animals during the evolutionarily fecund Cambrian explosion.

So, ever-changing environments are how physiological systems evolved (see Richardson, 2017: chapters 4 and 5); if the environment had been static, such physiologies would not have evolved. Ever-changing environments were imperative to the evolution of life on earth, for without them organisms with complex physiologies (note that a physiological system is literally a whole complex of cells) would never have evolved and we would not be here.

In Chapter 8 Ward discusses epigenetic processes before and after mass extinctions. He states that, to mass extinction researchers, there are three ways in which mass extinctions have occurred: (1) asteroid or comet impacts; (2) greenhouse mass extinction events; and (3) glaciation extinction events. These mass extinctions caused the emergence of new body plans and new species—brought on by epigenetic mechanisms.

Chapter 9 discusses good and bad times in human history—and the epigenetic changes that may have occurred. Ward (2018: 149) discusses the Toba eruption, writing that "some small group of survivors underwent a behavioral change that became heritable, producing cultural change that is difficult to overstate." Environmental change leads to behavioral change, which eventually leads to change in morphology, as Lamarck said, and mass extinction events are the perfect illustration of what Lamarck was saying.

In chapter 10 Ward discusses epigenetics and violence, the star of the chapter being MAOA. Take this example from Ward (2018: 167-168):

Causing violent death or escaping violent death or simply being subjected to intense violence causes significant flooding of the body with a whole pharmacological medicine chest of proteins, and in so doing changes the chemical state of virtually every cell. This produces epigenetic change(s) that can, depending on the individual, create a newly heritable state that is passed on to the offspring. The epigenetic change caused by the fight-or-flight response may cause progeny to be more susceptible to causing violence.

Ward then discusses MAOA (pp. 168-170), though read my thoughts on the matter. He discusses the role of epigenetics in the "turning on" of the gene; child abuse has been shown to cause epigenetic changes in the brain (Zannas et al, 2015). (It's notable that Ward—rightly—dispenses with the nature vs. nurture argument in this chapter.)

In Chapter 11, Ward discusses food and famine changing our DNA. He cites the most popular example, the studies done on survivors of the Dutch famine who bore children during or after it. (I have discussed this at length.) In September of 1944, the Dutch ordered a nationwide railroad strike. The Germans then restricted food and medical access to the country, causing the deaths of some 20,000 people and harming millions more. Those who were in the womb during the famine had higher rates of disorders such as obesity, anorexia, and cardiovascular incidents.

However, one study showed that if one's father had little access to food during his slow growth period, then cardiovascular disease mortality was low, while diabetes mortality was high when the paternal grandfather was exposed to an excess of food. Further, when SES factors were controlled for, there was a 32-year difference in lifespan depending on whether the grandfather was exposed to an overabundance or a shortage of food just before puberty.

Nutrition can alter the epigenome (Zhang and Kutateladze, 2018); since the epigenome is heritable, these changes can be passed on to future generations too.

Ward then discusses the microbiome and epigenetics (read my article for a primer on the microbiome, what it does, and racial differences in it). The microbiome has been called “the second genome” (Grice and Segre, 2012), and so, any changes to the “second genome” can also be passed down to subsequent generations.

In Chapter 12, Ward discusses epigenetics and pandemics. Seeing people die from horrible diseases of course has horrible effects on people. Yes, there were evolutionary implications from these pandemics in that the gene pool was reduced—but what of the effects on the survivors? Methylation impacts behavior and behavior impacts methylation (Lerner and Overton, 2017), and so differing behaviors after such atrocities can be tagged on the epigenome.

Ward then takes the discussion of pandemics and death and shifts to religion. Imagine seeing your children die: would you not want to believe that there was a better place for them after death to—somewhat—quell your sorrow over their loss? Of course, having an epiphany about something (anything, not just religion) can change how you view life. Ward also discusses a study in which atheists had different brain regions activated even while no stimulation was presented. (I don't like brain imaging studies; see William Uttal's books and papers.) Ward also discusses the VMAT2 gene, which "controls" mood through the production of the VMAT protein, elevating neurotransmitters such as dopamine and serotonin (similar to taking numerous illegal drugs).

Then in Chapter 13 he discusses chemicals and toxins and how they relate to epigenetic processes. These kinds of chemicals and toxins are linked with changes in DNA methylation, microRNAs, and histone modifications (Hou et al, 2012). (Also see Tiffon, 2018 for more on chemicals and how they affect the epigenome.)

Finally, in Chapter 14 Ward discusses the future of evolution in a world with CRISPR-Cas9 and the many ways in which the technology can be useful to us. He describes one study in which Chinese scientists knocked out the myostatin gene in 65 dog embryos. Twenty-seven of the dogs were born, and only two—a male and a female—had both copies of the myostatin gene disrupted. This is just like the "double-muscled" cattle. See my article 'Double-Muscled' Humans?
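A toy sketch of the targeting step such knockouts rely on (the sequence is hypothetical and real guide design also weighs off-targets, GC content, and more): SpCas9 cuts next to an "NGG" PAM, so candidate 20-nt guides are the stretches immediately upstream of each NGG site.

```python
# Toy sketch (hypothetical sequence, illustration only): finding candidate
# SpCas9 guide sites. SpCas9 requires an "NGG" PAM; the 20 nt immediately
# 5' of the PAM is the protospacer that a guide RNA would be designed to match.
import re

# Hypothetical fragment standing in for part of the myostatin (MSTN) gene.
seq = "ATGCAAAAACTGCAACTCTGTGTTTATATTTACCTGTTTATGCTGATTGTTGCTGGTCCAGTGGATCTAAATGAGGG"

def candidate_guides(dna: str, guide_len: int = 20):
    """Return (guide, pam, position) for every NGG PAM with room for a full guide."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):   # overlapping NGG matches
        pam_start = m.start(1)
        if pam_start >= guide_len:
            hits.append((dna[pam_start - guide_len:pam_start], m.group(1), pam_start))
    return hits

for guide, pam, pos in candidate_guides(seq):
    print(f"pos {pos:>3}: guide {guide}  PAM {pam}")
```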

He then discusses the possibility of "supersoldiers" and whether we can engineer humans to be emotionless killing machines. Imagine being able to engineer humans that had no sympathy, no empathy, but that looked just like you and me. CRISPR is a tool that uses epigenetic processes and, thus, we can say that CRISPR is a man-made Lamarckian mechanism of genetic change (mimicking lateral gene transfer).

Now, let's quickly discuss Coyne's review before I give my thoughts on the book. Coyne criticizes Ward's article linked above (he admits he did not read the book), taking issue with the claim that the two nautiluses discussed above are the same species with the same genome, with epigenetic forces driving the differences in morphology (phenotype). Take Coyne's critique of Vandepas et al (2016)—that they only sequenced two mitochondrial genes. Combosch et al (2017; of which Ward was a coauthor) write (my emphasis):

Moreover, previous molecular phylogenetic studies indicate major problems with the conchiological species boundaries and concluded that Nautilus represents three geographically distinct clades with poorly circumscribed species (Bonacum et al, 2011; Ward et al, 2016). This has been reiterated in a more recent study (Vandepas et al, 2016), which concluded that N. pompilius is a morphologically variable species and most other species may not be valid. However, these studies were predominantly or exclusively based on mitochondrial DNA (mtDNA), an informative but often misleading marker for phylogenetic inference (e.g., Stöger & Schrödl 2013) which cannot reliably confirm and/or resolve the genetic composition of putative hybrid specimens (Wray et al, 1995).

Looks like Coyne did not look hard enough for more studies on the matter. In any case, it's not just Ward who makes this argument—many other researchers do (see, e.g., Tajika et al, 2018). So if there is no genetic difference between these two (so-called) species and yet they have morphological differences, then the likely possibility is that the differences in morphology are environmentally driven.

Lastly, Coyne was critical of Ward's thoughts on the heritability of histone modifications, DNA methylation, etc. It seems that Coyne has not read the work of philosopher Jan Baedke (see his Google Scholar page), specifically his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, along with the work of sociologist Maurizio Meloni (see his Google Scholar page), specifically his book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics. If he had, Coyne would see that his rebuttal to Ward makes no sense, as Baedke discusses epigenetics from an evolutionary perspective and Meloni discusses epigenetics from a social, human perspective and what can—and does—occur in regard to epigenetic processes in humans.

Coyne did discuss Noble’s views on epigenetics and evolution—and Noble responded in one of his talks. However, it seems like Coyne is not aware of the work of Baedke and Meloni—I wonder what he’d say about their work? Anything that attacks the neo-Darwinian Modern Synthesis gets under Coyne’s skin—almost as if it is a religion for him.

Did I like the book? I thought it was good. Out of 5 stars, I give it 3. He got some things wrong. For instance, I asked Shea Robinson, author of Epigenetics and Public Policy: The Tangled Web of Science and Politics, about the beginning of the book and he directed me to two articles on his website: Lamarck's Actual Lamarckism (or How Contemporary Epigenetics is not Lamarckian) and The Unfortunate Legacy of Jean-Baptiste Lamarck. The beginning of the book is rocky, the middle is good (discussing the Cambrian explosion), and the end is alright. The strength of the book is how Ward discusses the processes by which epigenetic change occurs and how epigenetic processes can occur within—and help drive—evolutionary change, just as Jablonka and Lamb (1995, 2005) argue, along with Baedke (2018). The book is a great read, if only for the history of epigenetics (which Robinson (2018) goes into in more depth, as do Baedke (2018) and Meloni (2019)).

Lamarck’s Revenge is a welcome addition to the slew of books and articles that go against the Modern Synthesis and should be required reading for those interested in the history of biology and evolution.

Gene-Selectionism vs. Developmental Systems Theory

2300 words

Two dominant theories exist in regard to development: the "gene's eye view"—gene selectionism (GS)—and the developmental view—developmental systems theory (DST). GS proposes that there are two fundamental processes in evolution: replication and interaction. Replicators (the term was coined by Dawkins) are anything that is copied into the next generation, whereas interactors (vehicles) are things that only exist to ensure the replicators' survival. Thus, Dawkins (1976) proposes a distinction between the "vehicle" (the organism) and its "riders"/replicators (the genes).

Gene selectionism

Gene selectionists propose a simple hypothesis: evolution occurs through the differential survival of genes, the main premise being that the "gene" is "the ultimate, fundamental unit of natural selection." Dusek (1999: 156) writes that "Gene selection claims that genes, not organisms, groups of organisms or species, are selected. The gene is considered to be the unit of selection." The gene selectionist view is best—and most popularly—put in Richard Dawkins' seminal book The Selfish Gene (1976), in which he posits that genes "compete" with each other, and that our "selfish actions" are the result of our genes attempting to replicate into the next generation, relegating our bodies to disposable "vehicles" that only house the "replicators" (or "drivers").

Though, just because one is a gene selectionist does not necessarily mean that they are a genetic determinist (both views will be discussed below). Gene selectionists are committed to the view that genes make a distinctive contribution toward building interactors. Dawkins (1982) claims that genetic determinism is not a problem in regard to gene selectionism. Replicators (genes) have a special status to gene selectionists. Gene selectionists argue that adaptive evolution only occurs through cumulative selection, while only the replicators persist through the generations. Gene selectionists do not see organisms as replicators since genes—and not organisms—are what is replicated according to the view.

The gene selectionist view (Dawkins’ 1976 view) can also be said to apply what Okasha (2018) terms “agential thinking”. “Agential thinking” is “treating an evolved organism as if it were an agent pursuing a goal, such as survival or reproduction, and treating its phenotypic traits, including its behaviour, as strategies for achieving that goal, or furthering its biological interests” (Okasha, 2018: 12). Dawkins—and other gene selectionists—treat genes as if they have agency, speaking of “intra-genomic conflict”, as if genes are competing with each other (sure, it’s “just a metaphor”, see below).

Okasha (2018: 71) writes:

To see how this distinction relates to agential thinking, note that every gene is necessarily playing a zero-sum game against other alleles at the same locus: it can only spread in the population if they decline. Therefore every gene, including outlaws, can be thought of as ‘trying’ to outcompete alleles, or having this as its ultimate goal.
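Okasha’s zero-sum point is, at bottom, just the arithmetic of allele frequencies at a single locus: the frequencies sum to one, so one allele can spread only if the others decline. A minimal sketch in Python (the fitness values are illustrative assumptions of mine, not figures from Okasha):

```python
# Minimal sketch of the zero-sum point: at one locus with two alleles,
# frequencies must sum to 1, so any increase in one allele's frequency
# is exactly matched by a decrease in the other's.
# Illustrative fitness values; nothing here is from Okasha (2018).

def next_generation(p, w_A=1.05, w_a=1.00):
    """One generation of haploid selection at a two-allele locus."""
    q = 1.0 - p                      # frequency of the alternative allele
    mean_w = p * w_A + q * w_a       # population mean fitness
    return (p * w_A) / mean_w        # new frequency of allele A

p = 0.10
for gen in range(5):
    p_new = next_generation(p)
    # The gain for A equals the loss for a, generation after generation.
    print(f"gen {gen}: A rises by {p_new - p:+.4f}, a falls by {(1 - p_new) - (1 - p):+.4f}")
    p = p_new
```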

Selfish genes also have an intermediate goal: to maximize fitness, which is achieved through expression in the organismic phenotype.

Thus, according to Okasha (2018: 73), “… selfish genetic elements have phenotypic effects which can be regarded as adaptations, but only if we apply the notions of agent, benefit, and goal to genes themselves”, though “… only in an evolutionary context [does] it [make] sense to treat genes as agent-like and credit them with goals and interests.” It does not, however, make sense to treat genes as even “agent-like” and credit them with goals and interests, since goals and interests can only be attributed to humans.

Other genes have as their intermediate goal to enhance the fitness of their host organism’s relatives, by causing altruistic behaviour [genes can’t cause altruistic behavior; it is an action]. However, a small handful of genes have a different intermediate goal, namely to increase their own transmission in their host organism’s gametes, for example, by biasing segregation in their favour, or distorting the sex-ratio, or transposing to new sites in the genome. These are outlaws, or selfish genetic elements. If outlaws are absent or are effectively suppressed, then the genes within a single organism have a common (intermediate) goal, so will cooperate: each gene can only benefit itself by benefiting the whole organism. Agential thinking then can be applied to the organism itself. The organism’s goal—maximizing its fitness—then equates to the intermediate goal of each of the genes within it. (Okasha, 2018: 72)

Attributing agential thinking to anything other than humans is erroneous, since genes are not “selfish.”

The selfish gene is one of the main theories that define the neo-Darwinian paradigm, and it is flat-out wrong. Genes are not ultimate causes, as the crafters of the neo-Darwinian Modern Synthesis (MS) propose; genes are resources in a dynamic system and can thus be seen as causes only in a passive, not active, sense (Noble, 2011).

Developmental systems

The alternative to the gene-centric view of evolution is that of developmental systems theory (DST), first proposed by Oyama (1985).

The argument for DST is simple:

(1) Organisms obviously inherit more than DNA from their parents. Since organisms can behave in ways that alter the environment, environments are also passed on to offspring. Thus, it can be said that genes are not the only things inherited; a whole developmental matrix is.

(2) Genes, according to the orthodox view of the MS, interact with many other factors for development to occur, and so genes are not the only things that help ‘build’ the organism. Genes could still play some “privileged” role in development, in that they “control”, “direct” or “organize” everything else, but that is for gene selectionists to demonstrate. (See Noble, 2012.)

(3) The common claim that genes contain “information” (that is, context-independent information) is untenable, since any construal of “information” on which genes carry developmental information applies equally to other developmental resources. Genes cannot be singled out as privileged causes in development.

(4) Other attempts to privilege genes, such as the claim that genes are copied more “directly” than other resources, are also mistaken: the distinctions they draw between genes and other developmental factors do not hold up.

(5) Genes, then, cannot be privileged in development, and are no different than any other developmental factor. Genes, in fact, are just passive templates for protein construction, waiting to be used by the system in a context-dependent fashion (see Moore, 2002; Schneider, 2007). The entire developmental system reconstructs itself “through numerous independent causal pathways” (Sterelny and Griffiths, 1999: 109).

DNA is not the only thing inherited, and the so-called “famed immortality of DNA is actually a property of cells [since] [o]nly cells have the machinery to correct frequent faults that occur in DNA replication.” The thing about replication, though, is that “DNA and the cell must replicate together” (Noble, 2017: 238). A whole slew of developmental tools are inherited and that is what constructs the organism; organisms are, quite obviously, constructed not by genes alone.

Developmental systems, as described by Oyama (1985: 49), do not “have a final form, encoded before its starting point and realized at maturity. It has, if one focuses finely enough, as many forms as time has segments.” Oyama (1985: 61) further writes that “The function of the gene or any other influence can be understood only in relation to the system in which they are involved. The biological relevance of any influence, and therefore the very ‘information’ it conveys, is jointly determined, frequently in a statistically interactive, not additive, manner, by that influence and the system state it influences.”

DNA is, of course, important. Without it, there would be nothing for the cell to read (recall that the genome is an organ of the cell), and so no development would occur. But DNA is “information” about an organism only in the process of cellular functioning.

The simple fact of the matter is this: the development of organs and tissues is not directly “controlled” by genes, but by the exchange of signals between cells. “Details notwithstanding, what is important to note is that whatever kinds of signals it sends out depends on the kind of signals it receives from its immediate environment. Therefore, neighboring cells are interdependent, and it is local interactions among cells that drive the developmental processes” (Kampourakis, 2017: 173).

The fact of the matter is that whether or not a trait is realized depends on the developmental processes (and the physiologic system itself) and the environment. Kampourakis, just like Noble (2006, 2012, 2017), pushes a holistic view of development and the system. Kampourakis (2017: 184) writes:

What genetics research consistently shows is that biological phenomena should be approached holistically, at various levels. For example, as genes are expressed and produce proteins, and some of these proteins regulate or affect gene expression, there is absolutely no reason to privilege genes over proteins. This is why it is important to consider developmental processes in order to understand how characters and disease arise. Genes cannot be considered alone but only in the broader context (cellular, organismal, environmental) in which they exist. And both characters and disease in fact develop; they are not just produced. Therefore, reductionism, the idea that genes provide the ultimate explanation for characters and disease, is also wrong. In order to understand such phenomena, we need to consider influence at various levels of organization, both bottom-up and top-down. This is why current research has adopted a systems biology approach (see Noble, 2006; Voit, 2016 for accessible introductions).

All this shows that developmental processes and interactions play a major role in shaping characters. Organisms can respond to changing environments through changes in their development and eventually their phenotypes. Most interestingly, plastic responses of this kind can become stable and inherited by their offspring. Therefore, genes do not predetermine phenotypes; genes are implicated in the development of phenotypes only through their products, which depends on what else is going on within and outside cells (Jablonka, 2013). It is therefore necessary to replace the common representation of gene function presented in Figure 9.6a, which we usually find in the public sphere, with others that consider development, such as the one in Figure 9.6b. Genes do not determine characters, but they are implicated in their development. Genes are resources that provide cells with a generative plan about the development of the organism, and have a major role in this process through their products. This plan is the resource for the production of robust developmental outcomes that are at the same time plastic enough to accommodate changes stemming from environmental signals.

[Figure 9.6 from Kampourakis (2017); caption below.]

Figure 9.6 (a) The common representation of gene function: a single gene determines a single phenotype. It should be clear from what has been presented in the book so far that this is not accurate. (b) A more accurate representation of gene function that takes development and environment into account. In this case, a phenotype is produced in a particular environment by developmental processes in which genes are implicated. In a different environment the same genes might contribute to the development of a different phenotype. Note the “black box” of development.

[Kampourakis also writes on page 188, note 3]

In the original analogy, Wolpert (2011, p. 11) actually uses the term “program.” However, I consider the term “plan” as more accurate and thus more appropriate. In my view, the term “program” implies instructions and their implementation, whereas the term “plan” is about instructions only. The notion of a genetic program can be very misleading because it implies that, if it were technically feasible, it would be possible to compute an organism by reading the DNA sequence alone (see Lewontin, 2000, pp. 17-18).

Kampourakis is obviously speaking of a “plan” in a context-dependent manner since that is the only way that genes/DNA contain “information” (Moore, 2002; Schneider, 2007). The whole point is that genes, to use Noble’s terminology, are “slaves” to the system, since they are used by and for the (physiological) system. Developmental systems theory is a “wholeheartedly epigenetic approach to development, inheritance and evolution” (Hochman and Griffiths, 2015).

This point is driven home by Richardson (2017:111):

And how did genes eventually become established? Probably not at all as the original recipes, designers, and controllers of life. Instead they arose as templates for molecular components used repeatedly in the life of the cell and the organism: a kind of facility for just-in-time production of parts needed on a recurring basis. Over time, of course, the role of these parts themselves evolved to become key players in the metabolism of the cell—but as part of a team, not the boss.

[…]

It is not surprising, then, that we find that variation in form and function has, for most traits, only a tenuous relationship with variation in genes.

[And also writes on page 133]:

There is no direct command line between environments and genes or between genes and phenotypes. Predictions and decisions about form and variation are made through a highly evolved dynamical system. That is why ostensibly the same environment, such as a hormonal signal, can initiate a variety of responses like growth, cell division, differentiation, and migration, depending on deeper context. This reflects more than fixed responses from fixed information in genes, something fatally overlooked in the nature-nurture debate.

(Also read Richardson’s article So what is a gene?)

Conclusion

The gene-selectionist point-of-view entails too many (false) assumptions. The DST point of view, on the other hand, does not fall prey to the pitfalls of the gene-selectionist POV; developmental systems theorists look at genes not as the ultimate causes of development—and, along with that, not at changes in gene frequency as the sole driver of evolutionary change—but only as resources to be used by and for the system. Genes can only be looked at in terms of development, and in no other way (Kampourakis, 2017; Noble, 2017). Thus, the gene-selectionists are wrong; the main tenet of the neo-Darwinian Modern Synthesis, gene-selectionism—the selfish gene—has been refuted (Jablonka and Lamb, 2005; Noble, 2006, 2011). The main tenets of the neo-Darwinian Modern Synthesis have been refuted, and so it is now time to replace the Modern Synthesis with a new view of evolution: one that includes the role of genes and development and the role of epigenetics in the developmental system. The gene-selectionist view champions an untenable view of the gene: that the gene is privileged above all other developmental variables. But Noble and Kampourakis show that this is not the case, since DNA is inherited with the cell; the cell is what is “immortal”, to use the language of Dawkins—not DNA itself.

A priori, there is no privileged level of causation, and this includes the gene, which so many place at the top of the hierarchy (Noble, 2012).

What Is the “Human Diet”?

3000 words

Is there one diet (or one with slight modifications) that all humans should be eating? I’m skeptical of such claims. Both vegans (those who do not eat or use animal products) and carnivores (those who eat only animal products), in my opinion, have some warped views on diet and human evolution. Both are extreme views with little to no support; both have wrong ideas about diet throughout our evolution; both get some things right. While it is hard to pinpoint what the “human diet” is, clearly there were certain things that we ate throughout our evolutionary niches in our ancestral Africa that we “should” be eating today (in good quantities).

Although it is difficult to reconstruct the diet of early hominids due to lack of specimens (Luca, Perry, and Rienzo, 2014), by studying the eating behavior of our closest evolutionary relatives—the chimpanzees—we can get an idea of what our LCA ate and how it fed (Ulijaszek, Mann, and Elton, 2013). Humans have lived in nearly every niche we could possibly be in and, therefore, have come across the most common foods in each ecology. If animal A is in ecosystem E with foods X, Y, and Z, then animal A eats foods X, Y, and Z, since animals consume what is in their ecosystem. Knowing this much, the niches our ancestors lived in had to have a mix of both game and plants, and therefore that was our diet (in differing amounts, obviously). But it is more complicated than that.

So, knowing this, according to Ulijaszek, Mann, and Elton (2013: 35), “Mammalian comparisons may be more useful than ‘Stone Age’ perspectives, as many of the attributes of hominin diets and the behaviour associated with obtaining them were probably established well before the Pleistocene, the time stone agers were around (Foley 1995; Ulijaszek 2002; Elton 2008a).” Humans eat monocots (various flowering plants with one seed), which is not common in our order. The advent of farming was us “expanding our dietary niche”, and “the widespread adoption of agriculture [was] an obvious point of transition to a ‘monocot world’” (Ulijaszek, Mann, and Elton, 2013). Although these foodstuffs dominate our diet, there is seasonality in what types of those foods we consume.

So since humans tend not to pick at things to eat but have discrete meals (it is worth noting that the idea that one should have “three square meals a day” is a myth; see Mattson et al, 2014), we need to eat a lot in the times we do eat. Therefore, since we are large-bodied primates and our energy needs are much higher (due to our large brains, which consume 20 percent of our daily caloric intake), we need higher-quality energy. The overall quality and energy density of our diets are due to meat-eating—something folivorous/frugivorous primates do not do. We have a shorter gut tract, which is “often attributed to the greater reliance on faunivory in humans“, though “humans are not confined to ‘browse’ vegetation … and make extensive use of grasses and their animal consumers” (Ulijaszek, Mann, and Elton, 2013: 58). Due to this, we show amazing dietary flexibility and adaptability, with the ability to eat a wide range of foodstuffs in most any environment we find ourselves in.

So “It is difficult to pinpoint what the human diet actually is … Nonetheless, humans are frequently described as omnivores” (Ulijaszek, Mann, and Elton, 2013: 59). Omnivores normally feed at two or more trophic levels, though others define omnivory as simply consuming both plants and animals (Chubaty et al, 2014). Trophic level one is occupied by plants; level two by herbivores—the primary consumers; level three by predators, who feed on the herbivores; levels four and five by apex predators and carnivores; and detritivores (those that feed on waste) consume dead organic matter from all of these levels. Though, of course, “omnivory” is a continuum and not a category in and of itself. Humans eat primary producers (plants) and primary consumers (herbivores) and some secondary consumers (like fish), “although human omnivory may only be possible because of technological processing” (Ulijaszek, Mann, and Elton, 2013: 59). Other animals described as “omnivorous” eat foods from only one trophic level and consume food from another level only when needed.

Humans—as a species—rely on meat consumption. Fonseca-Azevedo and Herculano-Houzel (2012) showed that the energetic cost of a brain is directly related to the number of neurons in the brain. So, there were metabolic limitations in regard to brain and body size. The number of hours available to feed along with the low caloric yield of plant foods explains why great apes have such large bodies and small brains—which was probably overcome by erectus, who probably started cooking food around 1.5 mya. If we consumed only a diet of raw foods, then it would have taken us around 9 h/day to consume the calories we would need to power our brains—which is just not feasible. So it is unlikely that erectus—who was the first to have the human body plan and therefore the ability to run, which implies he would have needed higher quality energy—would have survived on a diet of raw plant foods since it would take so many hours to consume enough food to power their growing brains.
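The feeding-time constraint here is simple arithmetic. The sketch below uses illustrative numbers of my own choosing (the neuronal running cost, the non-brain energy need, and the caloric yield of raw-plant feeding are assumptions, not figures taken from Fonseca-Azevedo and Herculano-Houzel) just to show how a raw diet pushes feeding time toward the roughly 9 h/day cited above:

```python
# Back-of-the-envelope sketch of the feeding-time constraint discussed by
# Fonseca-Azevedo and Herculano-Houzel (2012). The specific numbers below are
# illustrative assumptions for the arithmetic, not values from the paper.

KCAL_PER_BILLION_NEURONS = 6.0   # assumed neuronal running cost per day
NEURONS_BILLIONS = 86            # rough human neuron count, in billions
BODY_KCAL_PER_DAY = 2000         # assumed non-brain energy need
RAW_PLANT_KCAL_PER_HOUR = 275    # assumed net energy yield of raw-plant feeding

brain_kcal = KCAL_PER_BILLION_NEURONS * NEURONS_BILLIONS   # ~516 kcal/day
total_kcal = brain_kcal + BODY_KCAL_PER_DAY                # ~2516 kcal/day
feeding_hours = total_kcal / RAW_PLANT_KCAL_PER_HOUR       # ~9 hours/day

print(f"brain cost: ~{brain_kcal:.0f} kcal/day")
print(f"total need: ~{total_kcal:.0f} kcal/day")
print(f"raw-food feeding time: ~{feeding_hours:.1f} h/day")
```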

We can see that we are adapted to eating meat by looking at our intestines. Our small intestines are relatively long, whereas our large intestines are relatively short, which indicates that we became adapted to eating meat. Our “ability to eat significant quantities of meat and fish is a significant departure from the dietary norm of the haplorhine primates, especially for animals in the larger size classes.” Though “Humans share many features of their gut morphology with other primates, particularly the great apes, and have a gut structure that reflects their evolutionary heritage as plant, specifically ripe fruit, eaters” (Ulijaszek, Mann, and Elton, 2013: 63). Chimpanzees are not physiologically adapted to meat eating, which can be seen in the development of hypercholesterolemia along with vascular disease, even when fed controlled diets in captivity (Ford and Stanford, 2004).

When consuming a lot of protein, though, “rabbit starvation” needs to be kept in mind. Rabbit starvation is a type of malnutrition that arises from eating little to no fat and high amounts of protein. Since protein intake is physiologically demanding (it takes the most energy to process out of the three macros), Ben-Dor et al (2011) suggest a caloric ceiling of about 35 percent of kcal coming from protein. So erectus’ protein ceiling was 3.9 g/bw per day whereas for Homo sapiens it was 4.0 g/bw per day. Ben-Dor et al (2011) show that erectus’ DEE (daily energy expenditure) was about 2704 kcal, with “a maximum long-term plant protein ceiling of 1014 calories“, implying that erectus was, indeed, an omnivore. So, of course, the consumption of protein and raw plants are physiologically limited. Since erectus’ ceiling on protein intake was 947 kcal and his ceiling on raw plant intake was 1014 kcal, then, according to the model proposed by Ben-Dor et al (2011), erectus would have needed to consume about 744 kcal from fat, which is about 27 percent of his overall caloric intake and 44 percent of animal product intake.
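The arithmetic behind those percentages can be checked directly from the figures cited above:

```python
# Checking the arithmetic from the Ben-Dor et al (2011) figures cited above:
# daily energy expenditure minus the plant-food ceiling and the protein ceiling
# leaves the energy that had to come from fat.

dee_kcal = 2704             # erectus daily energy expenditure
plant_ceiling_kcal = 1014   # long-term ceiling on calories from raw plant foods
protein_ceiling_kcal = 947  # ceiling on calories from protein

fat_kcal = dee_kcal - plant_ceiling_kcal - protein_ceiling_kcal   # ~743 kcal
fat_share_of_diet = fat_kcal / dee_kcal                           # ~0.27
animal_kcal = fat_kcal + protein_ceiling_kcal                     # fat + animal protein
fat_share_of_animal = fat_kcal / animal_kcal                      # ~0.44

print(f"fat: ~{fat_kcal} kcal "
      f"(~{fat_share_of_diet:.0%} of the diet, ~{fat_share_of_animal:.0%} of animal intake)")
```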

Neanderthals would have consumed between 74 and 85 percent of their daily caloric energy during glacial winters from fat, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016), while consuming between 3,360 and 4,480 kcal per day (Steegman, Cerny, and Holliday, 2002). (See more on Neanderthal diet here.) Neanderthals consumed a large amount of protein, about 292 grams per day (Ben-Dor, Gopher, and Barkai, 2016: 370). Since our close evolutionary cousins (Neanderthals and erectus) ate large amounts of protein and fat, they were well-acclimated, physiologically speaking, to their high-protein diets. Their diets were not so high in protein that rabbit starvation would occur—fat was consumed in sufficient amounts from the animals that Neanderthals hunted and killed, so rabbit starvation was not a problem for them. But since rabbit starvation is a huge problem for our species, “It is therefore unlikely that humans could be true carnivores in the way felids are” (Ulijaszek, Mann, and Elton, 2013: 66).
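Using the standard conversion of roughly 4 kcal per gram of protein, those figures can be cross-checked: at the top of the cited intake range, 292 g of protein per day works out to roughly 74 percent of daily energy from fat, the low end of the reported range. A minimal sketch:

```python
# Cross-checking the Neanderthal figures cited above with the standard
# conversion of ~4 kcal per gram of protein.

protein_g = 292                  # grams of protein per day (Ben-Dor et al, 2016)
total_kcal = 4480                # upper end of the cited daily intake range
protein_kcal = protein_g * 4     # ~1168 kcal from protein
fat_fraction = 1 - protein_kcal / total_kcal   # ~0.74

print(f"protein: ~{protein_kcal} kcal; fat: ~{fat_fraction:.0%} of daily energy")
```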

We consume a diet that is both omnivorous and eclectic, which is determined by our phylogeny through the form of our guts; we have nutritional diversity in our evolutionary history. We needed to colonize new lands and, since animals can only consume what is in their ecosystem, the foods that are edible in said ecosystem will be what is consumed by that animal. Being eclectic feeders made the migration out of Africa possible.

But humans are not true carnivores, contrary to some claims. “Meat-eating has allowed humans to colonize high latitudes and very open landscapes. However, bearing in mind the phylogenetic constraints that prevent humans from being true carnivores, such expansion was probably not accomplished through meat-eating alone. Instead, humans have used their ability to technologically harvest, produce, and consume a very wide range of foods to help exploit all major biomes” (Ulijaszek, Mann, and Elton, 2013: 67).

Humans, though, lack the gut specialization and dentition to process grasses efficiently. This means that our ancestors ate animals that ate these grasses, and it is the C4 plants those animals consumed that elevated the C4 isotopic signal in the hominin fossils we have discovered. Information like this implies that our ancestors ate across a wide variety of trophic levels and had substantial dietary diversity throughout evolutionary history.

“Hominins lack the specialized dentition found in carnivorans (the group of animals that includes the cat and dog families) and other habitual meat and bone eaters, so must have pre-processed at least some of the meat in their diet” (Ulijaszek, Mann, and Elton, 2013: 81). This is where stone tools come into play (Zink and Lieberman, 2016). “Processing” food can be anything from extracting nutrients to changing how the food looks. We can look at “food processing” as a form of pre-digestion before consumption. The use of stone tools, and cooking, was imperative for us to begin the processing of meat and other foods. This gave us the ability to “pre-digest” our food before consumption, which increases the available energy in any food that is cooked/processed. For example, cooking denatures protein strands and breaks down cell walls, which gelatinizes the collagen in meat and allows for easier chewing and digestion. Carmody et al (2016) showed that adaptation to a cooked diet began around 275 kya.

In his book Catching Fire, Wrangham (2009: 17-18) writes:

Raw-foodists are dedicated to eating 100 percent of their diets raw, or as close to 100 percent as they can manage. There are only three studies of their body weight, and all find that people who eat raw tend to be thin. The most extensive is the Giessen Raw Food study, conducted by nutritionist Corinna Koebnick and her colleagues in Germany, who used questionnaires to study 513 raw-foodists who ate from 70 to 100 percent of their diet raw. They chose to eat raw to be healthy, to prevent illness, to have a long life, or to live naturally. Raw food included not only uncooked vegetables and occasional meat, but also cold-pressed oil and honey, and some items were lightly heated such as dried fruits, dried meat, and dried fish. Body mass index (BMI), which measures weight in relation to the square of the height, was used as a measure of fatness. As the proportion of food eaten raw rose, BMI fell. The average weight loss when shifting from a cooked to a raw food diet was 26.5 pounds (12 kilograms) for women and 21.8 pounds (9.9 kilograms) for men. Among those eating a purely raw diet (31 percent), the body weights of almost a third indicated chronic energy deficiency. The scientists’ conclusion was unambiguous: “a strict raw food diet cannot guarantee an adequate energy supply.”

Also, vegetarians and meat-eaters who cook their food have similar body weights. This implies that cooking food—no matter the type—gives the body more caloric energy to use, and that raw-foodists are fighting a losing battle with biology, consuming raw foods in quantities our guts are not suited for. As can be seen in the citation from Fonseca-Azevedo and Herculano-Houzel (2012) above, great apes who eat nothing but raw food have the large guts and bodies needed to consume the raw plant foods they eat; we cannot thrive on such a diet because it is neither calorically nor nutritionally viable for us—most importantly due to the size of our brains and their caloric requirements.

Carmody, Weintraub, and Wrangham (2011) show that modern raw-foodists who subsist on raw meat and plants have nutrient deficiencies and chronic energy deficiencies, even though they process their foods (cooking is a form of processing, as are cutting, mashing, pounding, etc.) in different manners, while females experience low fecundity. Thus, the cooking of food seems to be needed for normal biological functioning; we have clearly evolved past consuming all raw foods. So it is clear that cooking—along with meat-eating—was imperative to our evolution. (Which does not mean that humans only ate meat or that eating meat and only meat is part of our evolutionary history.) Cooking food led to it gelatinizing, which denatured the protein, leading to easier mastication, since the food was not as hard after cooking and required less bite force. This then led to smaller teeth over time, which is seen in erectus (Zink and Lieberman, 2016). This was due to cooking along with tool-use: tool-use led to smaller food particles, leading to less force per bite, which eventually led to smaller teeth in our lineage.

Finally, humans are sometimes said to be “facultative carnivores.” A facultative carnivore is an animal that does best on a carnivorous diet but can survive—not thrive—on other foodstuffs when meat is not available. This, though, doesn’t make sense. Humans are eclectic feeders—omnivorous in nature. Yes, we began cooking about 1.5 mya; yes, meat-eating (and the cooking of said meat) looms large in the evolution of our species; yes, without meat and cooking we would not have been able to meet the energy requirements that set us apart from chimpanzees/great apes. But this does not mean that we do “best” on a carnivorous diet. There are about 7,105 ethnic groups in the world (Spencer, 2014: 1029), and to say that all of these ethnies would do the same or similarly, physiologically speaking, on an all-meat diet is crazy talk. The claim that we subsisted on one type of food over all others throughout our evolutionary history is a bold one—with no basis in evolutionary history.

Marlene Zuk (2013: 103-104), author of Paleofantasy, writes:

Another implication of the importance Marlowe attaches to bow hunting is that, rather than starting out as exclusively carnivorous and then adding starches and other plant material to the diet, ancient humans have been able to increase the proportion of meat only after newer technology had come about, a mere 30,000 years ago. Other anthropologists concur that the amount of meat in the human diet grew as we diverged from our other primate ancestors. All of this means that, first, contrary to the claims of many paleo-diet proponents, the earliest humans did not have an exclusively meat-based diet that we are best adapted to eat; and second, our ancestors’ diets clearly changed dramatically and repeatedly over the last tens, not to mention hundreds, thousands of years, even before the advent of agriculture.

The assumption that we were fully (or even mostly) carnivorous and then added plant foods/carbs is clearly false. “Fantasies” like this are just-so stories; they are nice-sounding stories, but reality is clearly more nuanced than people’s evolutionary and Stone Age imaginations. This makes sense, though: we evolved from an LCA (last common ancestor) with chimpanzees some 6.3 mya (Patterson et al, 2006). So why would it make sense that we would then, ultimately, subsist only on an all-meat diet, if our LCA with chimpanzees was most likely a forager who lived in the trees (Lieberman, 2013)?

One thing, though, that I’m sure everyone agrees with is that the environments we have constructed for ourselves in the first world are maladaptive—what is termed an “evolutionary mismatch” (Lieberman, 2013; Genne-Bacon, 2014). The mismatch arises from the high-carb food environments we have constructed, with cheap foodstuffs loaded with sugar, salt, and fat, a combination much more addictive than any of them on their own (see Kessler, 2010). This makes food more palatable, and people then want to eat more of it. Foods like this, obviously, were not in our OEE (original evolutionary environment), and they therefore cause us huge problems in our modern-day environments. Evolutionary mismatches occur when technological advancement increases faster than the genome can adapt. This can clearly be seen in our societies and the explosion of obesity over the past few decades (Fung, 2016, 2018).

We did not evolve eating highly processed carbohydrates loaded with salt and sugar. That much everyone can agree on.

Conclusion

It is clear that the claims from both vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as what we “should be” eating. Humans are eclectic feeders; we will eat anything, since “Humans show remarkable dietary flexibility and adaptability“. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclecticism can be traced back to our Australopithecine ancestors. The claim that we were either “vegetarian/vegan or carnivore” throughout our evolution is false.

Humans aren’t “natural carnivores” or “natural vegans/vegetarians.” Humans are eclectic feeders; animals eat whatever is in their ecosystem, and so humans are omnivores. Though we can’t pinpoint what the “human diet” is, since there is great variability in it due to culture/ecology, we know one thing: we did not subsist mainly on only one food; we had a large variety of foods, especially fallback foods, to consume throughout our evolutionary history. So claims that we evolved to eat one certain way (as vegans/vegetarians and carnivores claim) are false. (Note that I am not saying that high-carb diets are good; I’ve railed hard against them.)

Just-so Stories: FOXP2

1200 words

FOXP2 is a so-called “gene for” language. The gene is a transcription factor—meaning that it controls the activity of other genes. Changes to FOXP2 will therefore have effects on other genes as well. The evolution of language in humans was thus thought to have hinged on mutations in the FOXP2 gene. Humans that have a single-point mutation in FOXP2 “have impaired speech and grammar, but not impaired language comprehension” (Mason, et al, 2018: 403). The gene is found in numerous mammals (e.g., chimpanzees, gorillas, orangutans, rhesus macaques, and mice), but none of those mammals speak. The gene is expressed in the areas of the brain that affect motor functioning, which includes the coordination needed to produce words.

Mouse and human FOXP2 differ by only 3 amino acids. Gorillas, chimps, and macaques all have identical amino acid sequences at FOXP2, and only one amino acid difference separates them from mice. Two further amino acid differences separate humans from the sequence shared by chimpanzees, gorillas, and macaques. Thus, a difference of two amino acids between humans and the other primates appears to have made it possible for language to evolve. Evidence was said to exist for strong selective pressures on the two FOXP2 mutations, which allow the brain, larynx, and mouth to coordinate to produce speech. These two altered amino acids may change the ability of the FOXP2 transcription factor to be phosphorylated—proteins are activated by phosphorylation and deactivated by dephosphorylation, or the reverse.
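For what it is worth, “differing by two (or three) amino acids” just means a position-by-position comparison of the protein sequences. A toy sketch in Python (the short sequences below are invented stand-ins, not real FOXP2 fragments, which run to over 700 amino acids):

```python
# Toy illustration of counting amino acid differences between orthologs.
# The sequences below are hypothetical stand-ins; they are not real FOXP2
# fragments. The real comparisons are: human vs. chimp/gorilla/macaque = 2
# differences, human vs. mouse = 3, mouse vs. the other primates = 1.

def aa_differences(seq1, seq2):
    """Count positions at which two equal-length protein sequences differ."""
    assert len(seq1) == len(seq2), "sequences must be aligned and equal length"
    return sum(a != b for a, b in zip(seq1, seq2))

chimp_fragment = "TENLSAARQ"   # hypothetical fragment shared by chimp/gorilla/macaque
human_fragment = "SENLSAARN"   # hypothetical fragment with two substitutions

print(aa_differences(chimp_fragment, human_fragment))  # -> 2 in this toy example
```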

Mason et al (2018: 403) write:

Comparative genomics efforts are now extending beyond primates. A role for FOXP2 in songbird singing and vocal learning has been proposed. Mice communicate via squeaks, with lost young mice emitting high-pitched squeaks, FOXP2 mutations leave mice squeakless. For mice and songbirds, it is a stretch to claim that FOXP2 is a language gene—but it is likely needed in the neuromuscular pathway to make sounds.

[Figure 18.17, Mason et al (2018: 403)]

Above is Figure 18.17 from Mason et al (2018: 403). They write:

Comparisons of synonymous and nonsynonymous changes in mouse and primate FOXP2 genes indicate that changing two amino acids in the gene corresponds to the emergence of human language. Black bars represent synonymous changes; gray bars represent nonsynonymous changes.

But is that the whole story? Is FOXP2 really a “gene for” language? New results call this hypothesis into question.

In their paper No Evidence for Recent Selection at FOXP2 among Diverse Human Populations, Atkinson et al (2018) did not find evidence for recent positive or balancing selection. Atkinson et al (2018) conclude that they:

do not find evidence that the FOXP2 locus or any previously implicated site within FOXP2 is associated with recent positive selection in humans. Specifically, we demonstrate that there is no evidence that the original two amino-acid substitutions were targeted by a recent sweep limited to modern humans <200 kya as suggested by Enard et al. (2002) … Any modified function of the ROI does not appear to be related to language, however, as modern southern African populations tolerate high minor allele frequencies with no apparent consequences to language faculty. We do not dispute the extensive functional evidence supporting FOXP2’s important role in the neurological processes related to language production (Lai et al., 2001, MacDermot et al., 2005, Torres-Ruiz et al., 2016). However, we show that recent natural selection in the ancestral Homo sapiens population cannot be attributed to the FOXP2 locus and thus Homo sapiens’ development of spoken language.

So the two mutations in exon 7 of FOXP2 were not selected and are not responsible for human language. Most likely, the accelerated evolutionary rate at this locus reflects a loss of function (LoF; a null allele).

The gene was originally discovered in a family that had a history of speech and language disorders (Lai et al, 2001). This “speech gene” was also found in Neanderthals in 2007 (see Krause et al, 2007). Thus, the modifications to FOXP2 occurred before humans and Neanderthals diverged.

So Atkinson et al (2018) found that the so-called sweep on FOXP2 within the last 200 kya was a statistical artifact caused by lumping Africans together with Caucasians and other populations. Of course, language is complicated, and no single gene will explain the emergence of human language.

This is a just-so story—that is, an ad hoc hypothesis. Humans had X, others didn’t have X or had a different form of X; therefore X explains human language faculties.

Atkinson et al’s (2018) results represent a substantial revision to the adaptive history of FOXP2, a gene regarded as vital to human evolution.

High evolutionary constraint among taxa but variability within Homo sapiens is compatible with a modified functional role for this locus in humans, such as a recent loss of function.

Therefore, this SNP must not be necessary for language function as both alleles persist at high frequency in modern human populations. Though perhaps obvious, it is important to note that there is no evidence of differences in language ability across human populations. (Atkinson et al, 2018)

This is another just-so story (Gould and Lewontin, 1976; Lloyd, 1999; Richardson, 2007; Nielsen, 2009) that seems to have bitten the dust. Of course, the functionality of FOXP2 and its role in the neurological processes related to language are not in dispute; what is disputed (and refuted) is the selectionist just-so story. Selectionist explanations are necessarily ad hoc. Thus, recent natural selection in our species cannot be attributed to the FOXP2 locus, and neither can our language capabilities.

There is a similar objection, not for FOXP2 and selectionist hypotheses, but for the Lactase gene. Nielsen (2009) puts it succinctly:

The difference in lactose intolerance among human geographic groups, is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). … This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question. … Although the presence of selection acting on genes underlying a phenotypic trait of interest does help support adaptive stories, it does not establish that selection acted directly on the specific trait of interest.

Even if there were evidence of positive selection of FOXP2 in humans, we cannot logically state that selection acted on the FOXP2 locus; functional effects and selection do not demonstrate that “selection” acted on that trait. Just-so stories (ad hoc hypotheses) “sound good”, but that’s only because they are necessarily true—one can have all the data they want, then they can think up any adaptive story to explain the data and the story will be necessarily true. Therefore, selectionist hypotheses are inherently ad hoc.

In conclusion, another selectionist hypothesis bites the dust. Never mind the fact that, if FOXP2 were supposedly “selected-for”, there would still be the problem of free-riders (Fodor and Piattelli-Palmarini, 2010). That is, “selection” cannot “select-for” fitness-enhancing traits if/when they are coextensive with other traits—there is no way for selection to distinguish between coextensive traits, and thus it does not explain trait fixation (in this case, the fixation of FOXP2). Ad hoc hypotheses are necessarily true—that is, they explain the data they purport to explain and only the data they purport to explain. These new results show that there is no support for positive selection at the FOXP2 locus.

Natural Selection is not an Explanatory Mechanism

2450 words

Darwin proposed, back in 1859, that species arose due to natural selection—the pruning of deleterious variations in a population, which led to the thinking that the “inherent design” in nature, formerly attributed to a designer (“God”), was due to a force Darwin called “natural selection” (NS). The line of reasoning is thus: (1) two individuals of the same population are mostly the same genetically/phenotypically, but have small differences between them, and one of those small differences is a difference in a trait needed for survival. (2) But if both traits can contribute to fitness, how does NS ‘know’ which of the coextensive traits to select for? Now think about two traits: trait T and trait T’. What would explain the fixation of either trait in the population we are discussing? NS is not—and cannot be—the mechanism of evolution.

In 2010, philosopher Jerry Fodor and cognitive scientist Massimo Piattelli-Palmarini wrote a book titled What Darwin Got Wrong, which argued that NS is not a causal mechanism in regard to the formation of new species. Their argument is (pg 114):

  1. Selection-for is a causal process.
  2. Actual causal relations aren’t sensitive to counterfactual states of affairs: if it wasn’t the case that A, then the fact that it’s being A would have caused its being B doesn’t explain its being the case that B.
  3. But the distinction between traits that are selected-for and their free-riders turns on the truth (or falsity) of relevant counterfactuals.
  4. So if T and T’ are coextensive, selection cannot distinguish the case in which T free-rides on T’ from the case that T’ free-rides on T.
  5. So the claim that selection is the mechanism of evolution cannot be true.

This argument is incredibly strong. If it is true, then NS cannot be the mechanism by which evolution occurs; NS is not—nor can it be—the mechanism of evolution. So, regarding the case of two traits that are coextensive with each other, it’s not possible to ascertain which trait was selected-for and which trait was the free-rider. NS cannot distinguish between two locally coextensive traits, so, therefore, it is not an explanatory mechanism and does not explain the evolution of species, contra Darwin. It cannot be the mechanism that connects phenotypic variation with fitness variation.

The general adaptationist argument is: “(1) the claim that evolution is a process in which creatures with adaptive traits are selected and (2) the claim that evolution is a process in which creatures are selected for their adaptive traits” (Fodor and Piattelli-Palmarini, 2010: 13). Darwinists are committed to inferring (2) from (1), though it is fallacious. It is known as the intensional fallacy.

Due to the intensionality of “select-for” and “trait”, one cannot infer from ‘Xs have trait t and Xs were selected’ to ‘Xs were selected for having trait t’” (Fodor and Piattelli-Palmarini, 2010: 139). How does one distinguish from a trait that was selected-for and a free-rider that hitched a ride on the truly adaptive trait for the organism in question? The argument provided above shows that it is not possible. “Darwinists have a crux about free-riding because they haven’t noticed the intensionality of selection-for and the like; and when it is brought to their attention, they haven’t the slightest idea what to do about it” (Fodor and Piattelli-Palmarini, 2010: 16).

No observation can show whether trait T or T’ was selected-for in virtue of its contribution to fitness in a given population; favoring one story over another in regard to the adaptation of the trait in question therefore makes no logical sense, due to the problem of free-riders (and favoring one story over another comes down to a bias in favor of that particular adaptive just-so story). For if two traits are coextensive—meaning that the traits coincide with one another—then how can NS—which does not have a mind—‘know’ to “select-for” whichever trait contributes to fitness in the population in question? Breeders are the perfect contrast case.

Breeders have minds and can therefore select for certain traits and against undesirable traits; since NS does not have a mind, however, this is not the case when it comes to (so-called) naturally selected traits. NS cannot explain the distribution of phenotypic traits throughout the world; there is no agent of NS, nor are there ‘laws of selection’; therefore, NS is not an explanatory mechanism. Explanations based on NS are based only on correlations between traits and fitness, not on causes themselves (this critique can be extended to numerous other fields, too). The problem with relying only on correlations between traits and fitness is two-fold: (1) the trait in question can be irrelevant to fitness and (2) the trait in question can be a free-rider.

Creatures have traits that increase fitness because those traits were selected-for, the story goes. NS explains why the creature in question has trait T, which increases fitness in environment E. One can then also make the claim that the selection of the trait in question was due to the increased fitness it gave the creature. However, if this claim is made, “then the theory of natural selection would reduce to a trait’s being a cause of reproductive success [which then] explains its being a cause of reproductive success, which explains nothing (and isn’t true).”

Since genetically-linked traits are coextensive with an infinitude of different possible outcomes, the hypothesis that trait X is an adaptation is underdetermined by all possible observations, which means that NS cannot explain how and why organisms have the traits they do: NS, lacking a mind and agency, cannot distinguish between two coextensive traits.

NS can be said to be an explanation if and only if two conditions are met: (1) NS can be understood as acting on counterfactuals, and (2) NS can be said to be acting according to physical evolutionary laws.

(1) A counterfactual is an “if-clause” that runs contrary to fact: a claim about what would have been the case had things been otherwise, whose antecedent is not true of the actual world (for example, “if I had no ears …” or “if I had no eyes …”). Thus, if it were possible for NS to be an explanation for the continuance of a specific trait that is linked to other traits (that is, they are coextensive) in a given population, it would need to—necessarily—invoke a counterfactual: the trait in question would still have to be selected for in the absence of its free-riders. As an example from Fodor and Piattelli-Palmarini (2010: 103), a heart pumps blood (what it was selected-for) and makes pumping sounds (its linked free-rider). Thus, if the pumping of blood and the sound that blood-pumping makes were not coextensive, then the pumping, not the pumping sounds, would get selected for.

There is a huge problem, though. Counterfactuals are intentional statements; they refer to concepts found in our minds, not any physical things. NS does not have a mind and thus lacks the ability to “select-for” since “selecting-for” is intentional. Therefore NS does not act on counterfactuals; it is blind to the fact of counterfactuals since it does not have a mind.

(2) It does not seem likely that there are “laws of selection”. Clearly, the adaptive value of any phenotype depends on the environment that the organism is in. Fodor and Piattelli-Palmarini (2010: 149) write (emphasis theirs):

The problem is that it’s unlikely that there are laws of selection. Suppose that P1 and P2 are coextensive but that, whereas the former is a property that affects fitness, the latter is merely a correlate of a property that does. The suggestion is that all this comes out right if the relation between P1 and fitness is lawful, and the relation between P2 and fitness is not. …it’s just not plausible that there are laws that relate phenotypic traits per se to fitness. What (if any) effect a trait has on fitness depends on what kind of phenotype is embedded in, and what ecology the creature that has the trait inhabits. This is to say that, if you wish to explain the effects that a phenotypic trait has on a creature’s fitness, what you need is not its history of selection but its natural history. And natural history offers not laws of selection but narrative accounts of causal chains that lead to the fixation of phenotypic traits. Although laws support counterfactuals, natural histories do not; and, as we’ve repeatedly remarked, it’s counterfactual support on which distinguishing the arches from the spandrels depends.

There is, too, a simple example regarding coextensive traits and selection. Think of the lactase gene. It is well known that many human populations are adapted to drinking milk—and the cause is gene-culture coevolution that occurred around the time of cattle domestication (Beja-Pereira et al, 2003; Gerbault et al, 2011). No one disputes the fact that gene-culture coevolution is how and why we can drink milk. But what people do dispute is the adaptive just-so story (Gould and Lewontin, 1976; Lloyd, 1999; Richardson, 2007) that was made to explain how and why the trait went to fixation in certain human populations. Nielsen (2009) writes (emphasis mine):

The difference in lactose intolerance among human geographic groups, is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). The cause for the difference in allele frequencies is primarily natural selection emerging about the same time as dairy farming evolved culturally (Bersaglieri et al. 2004). Together, these observations lead to a compelling adaptive story of natural selection favoring alleles causing lactose tolerance. But even in this case we have not directly shown that the cause for the selection is differential survival due to an ability/inability to digest lactose. We must acknowledge that there could have been other factors, unknown to us, causing the selection acting on the region around the Lactase gene. Even if we can argue that selection acted on a specific mutation, and functionally that this mutation has a certain effect on the ability to digest lactose, we cannot, strictly speaking, exclude the possibility that selection acted on some other pleiotropic effect of the mutation. This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question.

Selection could have acted on a free-rider that is coextensive with the lactase gene, and just because “the story fits the data” well (that is a necessary truth; of course the story fits the data, because some story can be formulated for any data) does not mean that it is true, i.e., that the reason for trait T is reason R, simply because the story “fits the data so well.”

Of course, this holds for EP, evolutionary anthropology, and my favorite theory for the evolution of human skin color, the vitamin D hypothesis. I do not, of course, deny that light skin is needed in order to synthesize vitamin D in climates with low UVB; that is a truism. What is denied is the claim that selection acted on light skin (and its associated/causal genes); what is denied is the inference from the combination of functional effect and selection. Just-so stories are necessarily true; they fit the data, of course, because one can formulate some story to fit any data points one has. Thus, Darwinists are just storytellers with a bunch of data; there is no way to distinguish between the selection of a trait because it increased fitness and the selection of a free-rider that is “just there”, does not itself increase fitness, and merely “rode in on” the trait that does.

NS is not and cannot be an explanatory mechanism. Darwinism has already been falsified (Jablonka and Lamb, 2005; Noble, 2011; Noble, 2012; Noble, 2017), and so this is yet another nail in the coffin for Darwinism. The fact that traits can be coextensive means that NS would have to “know” which trait to act on; NS cannot “know” which of the coextensive traits to act on (because it has no mind), and NS cannot be a general mechanism that connects phenotypic variation to variation in fitness. NS does not explain the evolution of species, nor can NS distinguish between two locally coextensive traits—traits T and T’—because NS has no agency and does not have a mind. Therefore NS is not an explanatory mechanism. Invoking NS to explain the continuance of any trait fails to explain the survival of that trait, because NS cannot distinguish between traits that enhance an organism’s fitness and free-riders that are irrelevant to survival but are coextensive with the selected-for trait.

P1) If there is selection for T but not T’, various counterfactuals must be true.
P2) If the counterfactuals are true, then NS must be an intentional agent, or there must be laws about “selection-for”.
P3) NS is mindless.
P4) There are no laws for “selection-for”.
∴ It is false that selection for T but not T’ occurs in a population.
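The argument is a simple chain of modus tollens steps. Here is a minimal formal sketch of its propositional form (the proposition names are mine, chosen only to mirror P1-P4):

```lean
-- A minimal propositional sketch of the argument's form (names are my own):
--   S : selection for T but not T' occurs
--   C : the relevant counterfactuals are true
--   A : natural selection is an intentional agent
--   L : there are laws of "selection-for"
theorem no_selection_for (S C A L : Prop)
    (p1 : S → C)        -- P1: selection-for requires true counterfactuals
    (p2 : C → A ∨ L)    -- P2: true counterfactuals require an agent or laws
    (p3 : ¬A)           -- P3: natural selection is mindless
    (p4 : ¬L)           -- P4: there are no laws of selection-for
    : ¬S :=
  fun s => (p2 (p1 s)).elim p3 p4
```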

One then has two choices:

(1) Argue that NS has a mind and therefore that it can “select for” certain traits that are adaptive in a given population of organisms in the environment in question. “Select-for” implies intention. Intentional acts only occur in organisms with minds. Intentional states are only possible if something has a mind. Humans are the only organisms with minds. Humans are the only organisms that can act intentionally. NS does not have a mind. (Animal breeders are an example: they can select for desirable traits and against undesirable traits because breeders are humans and humans can act intentionally.) Therefore NS does not act intentionally, since it does not have a mind. I don’t think anyone would argue that NS has a mind and acts intentionally as an agent; therefore P3 is true.

(2) Argue that there are laws for “selection-for” phenotypic traits related to fitness. But it’s not possible that there are laws that relate to the selection of a phenotype, per se, in a given population. The effect of a trait depends on the ecology of the organism in question as well as its natural history. Therefore, to understand the effects of a phenotypic trait on the fitness of an organism we must understand its natural history, not its selection history (so-called). Therefore P4 is true.

There are no laws of “selection-for”, nor does NS have a mind that can select a trait that contributes to an organism’s fitness rather than a trait that is merely correlated with the trait in question.

DNA is not a “Blueprint”

2200 words

Leading behavior geneticist Robert Plomin is publishing “Blueprint: How DNA Makes Us Who We Are” in October of 2018. I, of course, have not read the book yet. But if the main thesis of the book is that DNA is a “code”, “recipe”, or “blueprint”, then that is already wrong. This is because presuming that DNA is any of the three aforementioned things marries one to certain ideas, even if they themselves do not explicitly state them. Nevertheless, Robert Plomin is what one would term a “hereditarian”, meaning that he believes that genes—more than environment—shape an individual’s psychological and other traits. (That’s a false dichotomy, though.) In the preview for the book at MIT Press, they write:

In Blueprint, behavioral geneticist Robert Plomin describes how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. A century of genetic research shows that DNA differences inherited from our parents are the consistent life-long sources of our psychological individuality—the blueprint that makes us who we are. This, says Plomin, is a game-changer. It calls for a radical rethinking of what makes us who we are.

Genetics accounts for fifty percent of psychological differences—not just mental health and school achievement, but all psychological traits, from personality to intellectual abilities. Nature defeats nurture by a landslide.

Plomin explores the implications of this, drawing some provocative conclusions—among them that parenting styles don’t really affect children’s outcomes once genetics is taken into effect. Neither tiger mothers nor attachment parenting affects children’s ability to get into Harvard. After describing why DNA matters, Plomin explains what DNA does, offering readers a unique insider’s view of the exciting synergies that came from combining genetics and psychology.

I won’t get into most of these things today (I will wait until I have read the book for that); this article will simply show that DNA is, in fact, not a blueprint, and that DNA is not a “code” or “recipe” for the organism.

It’s funny that the little blurb says that “Nature defeats nurture by a landslide“, because, as I have argued at length, nature vs nurture is a false dichotomy (see Oyama, 1985, 2000; Moore, 2002; Schneider, 2007; Moore, 2017). Nature vs nurture is the battleground on which the false dichotomy of genes vs environment is fought. However, it makes no sense to partition heritability estimates if it is indeed true that genes interact with environment—that is, if nature interacts with nurture.
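
To make that point concrete, here is a toy simulation (my own illustration, not a model from any of the cited works): when a phenotype depends on a gene-by-environment interaction, the share of variance an additive model attributes to “genes” changes simply because the spread of environments changes, even though the relationship between genotype and phenotype has not changed at all.

```python
# Toy illustration (my own sketch, not from the cited works): with a gene-by-environment
# interaction, the "genetic" share of variance depends on the environmental distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
g = rng.normal(size=n)                      # standardized "genetic" values

for env_sd in (0.5, 2.0):                   # two environmental regimes, differing only in spread
    e = rng.normal(scale=env_sd, size=n)
    phenotype = g + g * e                   # main effect of g plus a G-by-E interaction

    # Fit the usual additive model P ~ a*g + b*e and ask how much variance "genes" explain.
    X = np.column_stack([g, e])
    (a, b), *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    genetic_share = np.var(a * g) / np.var(phenotype)
    print(f"env sd = {env_sd}: additive 'genetic' share is about {genetic_share:.2f}")

# Prints roughly 0.80 and 0.20: same genes, same developmental rule, but the
# heritability-style partition swings widely with the range of environments sampled.
```

The figures are not meant to model any real trait; the point is only that the additive partition is a property of the population and its environments, not of the genes themselves.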

DNA is also called “the book of life”. For example, in her book The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance, Nessa Carey writes that “There’s no debate that the DNA blueprint is a starting point” (pg 16). This, though, can be contested: “But the promise of a peep into the ‘book of life’ leading to a cure for all diseases was a mistake” (Noble, 2017: 161).

Developmental psychologist and cognitive scientist David S. Moore concurs. In his book The Developing Genome: An Introduction to Behavioral Epigenetics, he writes (pg 45):

So, although I will talk about genes repeatedly in this book, it is only because there is no other convenient way to communicate about contemporary ideas in molecular biology. And when I refer to a gene, I will be talking about a segment or segments of DNA containing sequence information that is used to help construct a protein (or some other product that performs a biological function). But it is worth remembering that contemporary biologists do not mean any one thing when they talk about “genes”; the gene remains a fundamentally hypothetical concept to this day. The common belief that there are things inside of us that constitute a set of instructions for building bodies and minds—things that are analogous to “blueprints” or “recipes”—is undoubtedly false. Instead, DNA segments often contain information that is ambiguous, and that must be edited or arranged in context-dependent ways before it can be used.

Still, others may use terms like “genes for” trait T. This, too, is incorrect. In his outstanding book Making Sense of Genes, Kostas Kampourakis writes (pg 19):

I also explain why the notion of “genes for,” in the vernacular sense, is not only misleading but also entirely inaccurate and scientifically illegitimate.

[…]

First, I show that genes “operate” in the context of development only. This means that genes are implicated in the development of characters but do not determine them. Second, I explain why single genes do not alone produce characters or disease but contribute to their variation. This means that genes can account for variation in characters but cannot alone explain their origin. Third, I show that genes are not the masters of the game but are subject to complex regulatory processes.

Genes can only be seen as passive templates, not ultimate causes (Noble, 2011), and they cannot explain the origin of different characters but can account for variation in physical characters. Genes only “do” something in the context of development; they are inert molecules and thusly cannot “cause” anything on their own.

Genes are not ‘for’ traits, but they are difference-makers for traits. Sterelny and Griffiths (1999: 102), in their book Sex and Death: An Introduction to Philosophy of Biology, write:

Sterelny and Kitcher (1988) responded to the idea that genes are invisible to selection by treating genes as difference makers, and as visible to selection by virtue of the differences they make. In doing so, they provided a formal reconstruction of the “gene for” locution. The details are complex, but the basic intent of the reconstruction is simple. A certain allele in humans is an “allele for brown eyes” because, in standard environments, having that allele rather than alternatives typically available in the population means that your eyes will be brown rather than blue. This is the concept of a gene as a difference maker. It is very important to note, however, that genes are context-sensitive difference makers. Their effects depend on the genetic, cellular, and other features of their environment.

(Genes can be difference makers for physical traits, but not for psychological traits because no psychophysical laws exist, but I’ll get to that in the future.)

Note how the terms “context-sensitive” and “context-dependent” keep appearing. The DNA-as-blueprint claim presumes that DNA is context-independent, but we cannot divorce genes—whatever they are—from their context, since genes and environment, nature and nurture, are intertwined. (It is even questioned whether ‘genes’ are truly units of inheritance; see Fogle, 1990. Fogle (2000) also argues that we should dispense with the concept of “gene” altogether and that biologists should instead use terms like intron, promoter region, and exon. Nevertheless, there is a large disconnect between the term “gene” as used in molecular biology and as used in classical genetics. Keller (2000) argues that there are still uses for the term “gene” and that we should not dispense with it. I believe we should.)

Susan Oyama (2000: 77) writes in her book The Ontogeny of Information:

Though a plan implies action, it does not itself act, so if the genes are a blueprint, something else is the constructor-construction worker. Though blueprints are usually contrasted with building materials, the genes are quite easily conceptualized as templates for building tools and materials; once so utilized, of course, they enter the developmental process and influence its course. The point of the blueprint analogy, though, does not seem to be to illuminate developmental processes, but rather to assume them and, in celebrating their regularity, to impute cognitive functions to genes. How these functions are exercised is left unclear in this type of metaphor, except that the genetic plan is seen in some peculiar way to carry itself out, generating all the necessary steps in the necessary sequence. No light is shed on multiple developmental possibilities, species-typical or atypical.

The Modern Synthesis is one of the causes of the genes-as-blueprints thinking; the Modern Synthesis gets causation in biology wrong. Genes are not active causes; they are passive templates, as many authors have argued. They thus cannot “cause” anything on their own.

In his 2017 book Dance to the Tune of Life: Biological Relativity, Denis Noble writes (pg 157):

As we saw earlier in this chapter, these triplet sequences are formed from any combination of the four bases U, C, A and G in RNA and T, C, A and G in DNA. They are often described as a genetic ‘code’, but it is important to understand that this usage of the word ‘code’ carries overtones that can be confusing.

A code was originally an intentional encryption used by humans to communicate. The genetic ‘code’ is not intentional in that sense. The word ‘code’ has unfortunately reinforced the idea that genes are active and even complete causes, in much the same way as a computer is caused to follow the instructions of a computer program. The more neutral word ‘template’ would be better. Templates are used only when required (activated); they are not themselves active causes. The active causes lie within the cells themselves since they determine the expression patterns for the different cell types and states. These patterns are communicated to the DNA by transcription factors, by methylation patterns and by binding to the tails of histones, all of which influence the pattern and speed of transcription of different parts of the genome. If the word ‘instruction’ is useful here at all, it is rather that the cell instructs the genome. As Barbara McClintock wrote in 1984 after receiving her Nobel Prize, the genome is an ‘organ of the cell’, not the other way around.

Realising that DNA is under the control of the system has been reinforced by the discovery that cells use different start, stop and splice sites for producing different messenger RNAs from a single DNA sequence. This enables the same sequence to code different proteins in different cell types and under different conditions [here’s where context-dependency comes into play again].

Representing the direction of causality in biology the wrong way round is therefore confusing and has far-reaching consequences. The causality is circular, acting both ways: passive causality by DNA sequences acting as otherwise inert templates, and active causality by the functional networks of interactions that determine how the genome is activated.

This takes care of the idea that DNA is a ‘code’. But what about DNA being a ‘blueprint’: the claim that all of the information needed to construct the organism is contained in its DNA before conception? DNA is clearly not a ‘program’ in that sense. The complete cell is also needed, and its “complex structures are inherited by self-templating” (Noble, 2017: 161). Thus, the “blueprint” is the whole cell, not just the genome itself (remember that the genome is an organ of the cell).
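
Noble’s splicing point can be made concrete with a toy sketch (illustrative only, not real genomics; the sequence and exon coordinates below are made up): the same DNA string yields different mature transcripts depending on which splice sites the cell uses, so the “instructions” are not fixed by the sequence alone.

```python
# Toy illustration: one DNA sequence, two different splice-site choices, two different
# mature transcripts. The sequence and exon coordinates are hypothetical.
dna = "ATGGCGTTTAAACCCGGGTTTCATTAG"

# Which exons are kept is decided by the cell, not by the DNA itself.
exons_cell_type_A = [(0, 9), (18, 27)]            # keeps exons 1 and 3
exons_cell_type_B = [(0, 9), (9, 18), (18, 27)]   # keeps all three exons

def splice(seq, exons):
    """Concatenate the chosen exons into a mature transcript."""
    return "".join(seq[start:end] for start, end in exons)

print(splice(dna, exons_cell_type_A))   # ATGGCGTTTTTTCATTAG
print(splice(dna, exons_cell_type_B))   # the full sequence
```

The sketch is deliberately crude, but it captures the direction of control Noble describes: the template is passive, and the pattern of use comes from the cell.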

Lastly, GWA studies have been all the rage recently. However, there is only so much we can learn from association studies before we need to turn to the physiological sciences for functional analyses. Indeed, Denis Noble (2018) writes in a recent editorial:

As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).

[…]

The results of GWAS do not reveal the secrets of life, nor have they delivered the many cures for complex diseases that society badly needs. The reason is that association studies do not reveal biological mechanisms. Physiology does. Worse still, “the more data, the more arbitrary, meaningless and useless (for future action) correlations will be found in them” is a necessary mathematical statement (3).

Nor does applying a highly restricted DNA sequence-based interpretation of evolutionary biology, and its latest manifestation in GWAS, to the social sciences augur well for society.
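
Noble’s point about spurious correlations is easy to demonstrate with a small simulation (my own sketch, not taken from the editorial): with purely random data, the absolute number of nominally “significant” genotype-trait correlations grows with the number of variants tested.

```python
# Minimal sketch: random genotypes, random "phenotype", no real associations anywhere.
# The number of spurious hits at p < 0.05 scales with the number of variants tested.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people = 1000
trait = rng.normal(size=n_people)          # random "phenotype", unrelated to anything

for n_snps in (100, 1_000, 10_000):
    genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # random 0/1/2 genotypes
    hits = 0
    for j in range(n_snps):
        r, p = stats.pearsonr(genotypes[:, j], trait)
        if p < 0.05:                       # nominal threshold, no correction
            hits += 1
    print(f"{n_snps} random SNPs tested -> {hits} spurious 'associations' at p < 0.05")

# Roughly 5% of tests pass by chance alone, so the bigger the data set,
# the more spurious correlations turn up.
```

This does not show that every GWAS hit is spurious, of course; it only illustrates why association alone, without functional analysis, is weak evidence.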

It is further worth noting that there is no privileged level of causation in biological systems (Noble, 2012). A priori, there is no justification for privileging one level over another with regard to causation, so claiming that one level of the organism is “higher” than another (for instance, that genes are, and should be, privileged over the environment or any other system in the organism regarding causation) is clearly false, since there is both upward and downward causation influencing all levels of the system.

In sum, it is highly misleading to refer to DNA as a “blueprint”, a “code”, or a “recipe.” Referring to DNA in this way presumes that DNA can be divorced from its context—that it does not work together with the environment. As I have argued in the past, association studies will not elucidate genetic mechanisms, nor will heritability estimates (Richardson, 2012). We need physiological testing for these functional analyses, and association studies like GWAS and even heritability estimates do not give us this type of information (Panofsky, 2014). So it seems that what Plomin et al are looking for, which they assume is “in the genes”, is not there, because they use a false model of the gene (Burt, 2015; Richardson, 2017). Genes are resources—templates to be used by and for the system—not causes of traits and development. They can account for variation in traits, but they cannot be said to be the origin of trait differences. Genes can be said to be difference makers, but whether they are difference makers for behavior, in my opinion, cannot be known.

(For further information on genes and what they do, read Chapters Four and Five of Ken Richardson’s book Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. Plomin himself seems to be a reductionist, and Richardson took care of that paradigm in his book. Lickliter (2018) has a good review of the book, along with critiques of the reductionist paradigm that Plomin et al follow.)

Otzi Man’s Last Meal and the Diet of Neanderthals

1100 words

The debate over what type of diet we should eat with regard to macronutrient composition rages on. Should we eat high-carb, low-fat (HCLF)? Low-carb, high-fat (LCHF)? Or something in between? The answer rests, of course, on the type of diets our ancestors ate—both immediate and in the distant past. In the 1990s, a frozen human was discovered in the Ötztal Alps, which gave him the name “Otzi man.” He was frozen in the mountains about 5,300 years ago. His remains have been studied in the 27 years since his discovery, but an in-depth analysis of his stomach contents was not possible until now.

A new paper was published recently which analyzed the stomach contents of Otzi man (Maixner et al, 2018). There is one reason why it took so long to analyze the contents of his stomach: the authors state that, due to mummification, his stomach moved high up into his rib cage. The Iceman was “omnivorous, with a diet consisting both of wild animal and plant material” (Maixner et al, 2018: 2). They found that his stomach had a really high fat content, with “the presence of ibex and red deer” (pg 3). He also “consumed either fresh or dried wild meat”, while “a slow drying or smoking of the meat over the fire would explain the charcoal particles detected previously in the lower intestine content” (pg 5).

The extreme alpine environment in which the Iceman lived and where he have been found (3,210 m above sea level) is particularly challenging for the human physiology and requires optimal nutrient supply to avoid rapid starvation and energy loss [31]. Therefore, the Iceman seemed to have been fully aware that fat displays an excellent energy source. On the other hand, the intake of animal adipose tissue fat has a strong correlation with increased risk of coronary artery disease [32]. A high saturated fats diet raises cholesterol levels in the blood, which in turn can lead to atherosclerosis. Importantly, computed tomography scans of the Iceman showed major calcifications in arteria and the aorta indicating an already advanced atherosclerotic disease state [33]. Both his high-fat diet and his genetic predisposition for cardiovascular disease [34] could have significantly contributed to the development of the arterial calcifications.  Finally, we could show that the Iceman either consumed fresh or dried meat. Drying meat by smoking or in the open air are simple but highly effective methods for meat preservation that would have allowed the Iceman to store meat long term on journeys or in periods of food scarcity. In summary, the Iceman’s last meal was a well-balanced mix of carbohydrates, proteins, and lipids, perfectly adjusted to the energetic requirements of his high-altitude trekking. (Maixner et al, 2018: 5)

They claim that “the intake of animal adipose tissue fat has a strong correlation with increased risk of coronary artery disease“, citing, of course, a paper with AHA involvement (Sacks et al, 2017), which says that “Randomized clinical trials showed that polyunsaturated fat from vegetable oils replacing saturated fats from dairy and meat lowers CVD.” This is nonsense, because the dietary fat guidelines were introduced without supporting trial evidence (Harcombe et al, 2016; Harcombe, Baker, and Davies, 2016; Harcombe, 2017). Saturated fat consumption is not even associated with all-cause mortality, type II diabetes, ischemic stroke, CVD (cardiovascular disease), or CHD (coronary heart disease) (de Souza et al, 2015).

Thus, if anything, what contributed to Otzi man’s arterial calcification seems to be grains/carbohydrates (see DiNicolantonio et al, 2017), not animal fat. Fat, at 9 kcal per gram, was the better choice for Otzi, as he got more kcal for his buck; eating a similar portion of carbohydrates, for example, would have meant spending more time eating (since carbohydrates have less than half the energy per gram that animal fat does). Since his stomach contained ibex (a type of goat) and red deer, it’s safe to say that many of his meals consisted mainly of animal fat and protein, with some cereals and plants thrown in (he was an omnivore).
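
As a rough back-of-the-envelope sketch (the 9 and 4 kcal per gram figures are the standard Atwater values; the daily total is an illustrative assumption, not a figure from Maixner et al, 2018):

```python
# Rough sketch: grams of food needed to cover a hypothetical high-altitude day,
# using the standard Atwater energy densities. The daily total is an assumption
# for illustration, not data from Maixner et al, 2018.
KCAL_PER_G_FAT = 9
KCAL_PER_G_CARB = 4

daily_kcal = 3000  # assumed figure for a day of alpine trekking

grams_if_all_fat = daily_kcal / KCAL_PER_G_FAT    # ~333 g
grams_if_all_carb = daily_kcal / KCAL_PER_G_CARB  # ~750 g, more than twice the mass

print(f"{grams_if_all_fat:.0f} g of fat vs {grams_if_all_carb:.0f} g of carbohydrate")
```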

We can then contrast the findings on Otzi’s diet with those for Neanderthals. It has been estimated that, during glacial winters, Neanderthals would have gotten around 74-85 percent of their calories from animal fat when no carbohydrates were around, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016). Furthermore, based on contemporary data from polar peoples, it is estimated that Neanderthals required around 3,360 to 4,480 kcal per day for winter foraging and cold resistance (Steegmann, Cerny, and Holliday, 2002). The upper limit of protein intake for Homo sapiens is about 4.0 g per kg of body weight per day, while for erectus it is about 3.9 g per kg of body weight per day (Ben-Dor et al, 2011), which suggests that Neanderthals, given their large body size, were consuming near the theoretical maximum of protein. So we can assume that Neanderthals consumed somewhere near 3,800 kcal per day. The average Neanderthal is said to have consumed about 292 grams of protein per day, or roughly 1,170 kcal (with estimates ranging from about 985 kcal at the low end to 1,170 kcal at the high end) (Ben-Dor, Gopher, and Barkai, 2016: 370).

If we further assume that Neanderthals consumed no carbohydrates during glacial winters (plant carbohydrates being unavailable in those conditions), that leaves animal fat and protein as their only sources of energy. Thus, Neanderthals would have consumed between roughly 2,812 and 3,230 kcal from animal fat, with the rest coming from protein. To put this into perspective: the average American man consumes about 100 grams of protein per day while consuming 2,195 kcal per day (Ford and Dietz, 2013). For these reasons, and more, I argued that Neanderthals were significantly stronger than Homo sapiens, and this does have implications for racial differences in athletic ability.
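
For what it’s worth, the figures above can be reproduced with simple arithmetic (assuming the ~3,800 kcal per day figure and the 74-85 percent fat share cited from Ben-Dor, Gopher, and Barkai, 2016; the rounding is mine):

```python
# Reconstructing the arithmetic above. Assumptions: ~3,800 kcal/day and a 74-85% fat
# share, as cited from Ben-Dor, Gopher, and Barkai, 2016; protein at 4 kcal per gram.
daily_kcal = 3800

fat_kcal_low = 0.74 * daily_kcal      # ~2,812 kcal from animal fat
fat_kcal_high = 0.85 * daily_kcal     # ~3,230 kcal from animal fat

protein_g = 292                        # estimated daily protein intake
protein_kcal = protein_g * 4           # ~1,168 kcal, close to the ~1,170 kcal cited

print(fat_kcal_low, fat_kcal_high, protein_kcal)  # 2812.0 3230.0 1168
```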

In sum, the last meal of Otzi man is now known. Of course, this is a case of n = 1, so we should not draw too large a conclusion from it, but it is interesting. I see no reason why the diets of Otzi’s relatives, or his own usual diet, would have differed much in composition. Like the Neanderthals, he ate a diet high in animal fat; unlike the Neanderthals, he also ate a more cereal-based diet, which may have contributed to his CVD and arterial calcification. We can learn a lot about ourselves and our ancestors through the analysis of their stomach contents (when possible), their teeth (when possible), and perhaps even their genomes (Berens, Cooper, and Lachance, 2017), because if we learn what they ate, we can perhaps begin to shift dietary advice toward a more ‘natural’ way of eating and avoid diseases of civilization. We have not had time to adapt to the new obesogenic environments we have constructed for ourselves; it is partly due to this that we have an obesity epidemic, and by studying the diets of our ancestors we can begin to remedy our obesity and other health problems.