
Just-so Stories: The Brain Size Increase

1600 words

The increase in brain size in our species over the last 3 million years has been the subject of numerous articles and books. Over that time period, brain size increased from our ancestor Lucy all the way to today. Many stories have been proposed to explain how and why it happened. The explanation is the same ol' one: those with bigger heads, and therefore bigger brains, had more children and passed on their "brain genes" to the next generation until all that was left was bigger-brained individuals of that species. But there is a problem here, just like with all just-so stories: how do we know that selection 'acted' on brain size and thusly "selected-for" the 'smarter' individuals?

Christopher Badcock, an evolutionary psychologist, wrote an intro to EP published in 2001, in which he takes a very balanced view of EP—noting its pitfalls and where, in his opinion, EP is useful. (Most may know my views on this already, see here.) In any case, Badcock cites R.D. Martin (1996: 155), who writes:

… when the effects of confounding variables such as body size and socio-economic status are excluded, no correlation is found between IQ and brain size among modern humans.

Badcock (2001: 48) also quotes George Williams—author of Adaptation and Natural Selection (1966; the precursor to Dawkins' The Selfish Gene)—who writes:

Despite the arguments that have been advanced, I cannot readily accept the idea that advanced mental capabilities have ever been directly favored by selection. There is no reason for believing that a genius has ever been likely to leave more children than a man of somewhat below average intelligence. It has been suggested that a tribe that produces an occasional genius for its leadership is more likely to prevail in competition with tribes that lack this intellectual resource. This may well be true in the sense that a group with highly intelligent leaders is likely to gain political domination over less gifted groups, but political domination need not result in genetic domination, as indicated by the failure of many a ruling class to maintain its members.

In Adaptation and Natural Selection, Williams was much more cautious than adaptationists today, stating that adaptationism should be invoked only in very special cases. Too bad that adaptationists today did not get the memo. But what gives? Doesn't it make sense that the "more intelligent" human 2 mya would be more successful when it comes to fitness than the "less intelligent" individual (whatever these words mean in this context)? Would a pre-historic Bill Gates have the most children due to his "high IQ", as PumpkinPerson has claimed in the past? I doubt it.

In any case, the increase in brain size—and therefore increase in intellectual ability in humans—has been the last stand for evolutionary progressionists. "Look at the increase in brain size", the progressionist says, "over the past 3 mya. Doesn't it look like there is a trend toward bigger, higher-quality brains in humans as our skills increased?" While it may look like that on its face, the real story is much more complicated.

Deacon (1990a) notes many fallacies that those who invoke the brain size increase across evolutionary history make, including: the evolutionary progression fallacy; the bigger-is-smarter fallacy; and the numerology fallacy. The evolutionary progression fallacy is simple enough. Deacon (1990a: 194) writes:

In theories of brain evolution, the concept of evolutionary progress finds implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar. Most of these accounts in some way or other are tied to problems of interpreting the correlates of brain size. The task that follows is to dispose of fallacious progressivist notions hidden in these analyses without ignoring the questions otherwise begged by the many enigmatic correlations of brain size in vertebrate evolution.

Of course, when it comes to the bigger-is-smarter fallacy, it is quite obviously not true that bigger is always better when it comes to brain size, as elephants and whales have larger brains than humans (also see Skoyles, 1999). But what they do not have more of than humans is cortical neurons (see Herculano-Houzel, 2009). Deacon (1990a: 201) describes the numerology fallacy:

Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.

Deacon (1990a: 232) concludes that:

The idea, that there have been progressive trends of brain evolution, that include changes in the relative proportions of different structures (i.e., enlarging more “advanced” areas with respect to more primitive areas) and increased differentiation, interconnection, and overall complexity of neural circuits, is largely an artifact of misunderstanding the complex internal correlates of brain size. … Numerous statistical problems, arising from the difficulty of analyzing a system with so many interdependent scaling relationships, have served to reinforce these misconceptions, and have fostered the tacit assumption that intelligence, brain complexity, and brain size bear a simple relationship to one another.
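
To put the bigger-is-smarter fallacy in concrete terms, here is a minimal sketch using rough estimates in the vicinity of Herculano-Houzel's published counts; treat the numbers as approximate illustrative values, not precise measurements:

```python
# Rough, approximate estimates (after Herculano-Houzel and colleagues);
# illustrative values only, not precise measurements.
species = {
    #                  brain mass (kg), cortical neurons (billions)
    "human":            (1.5, 16.0),
    "african_elephant": (4.6,  5.6),
}

for name, (mass_kg, cortical_bn) in species.items():
    print(f"{name:>16}: brain ~{mass_kg:.1f} kg, "
          f"~{cortical_bn:.1f} billion cortical neurons")

# The elephant brain is roughly 3x heavier yet holds roughly a third as many
# cortical neurons -- brain mass alone is a poor proxy for "smarts".
```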

Deacon (1990b: 255) notes that brains weren't directly selected for; rather, bigger bodies were (and bigger bodies mean bigger brains). This view doesn't run into the selection-for problem, since what is being selected here is the organism, not one of its traits:

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed.

Deacon (1990b: 697-698) notes that the large brain-to-body size ratio in humans compared to other primates is an illusion, "a surface manifestation of a complex allometric reorganization within the brain", and that the brain itself is unlikely to be the object of selection. The correlated reorganization of the human brain, to Deacon, is what makes humans unique—not our "oversized" brains for our body. Deacon (1990c) likewise states that "To a great extent the apparent "progress" of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account." (See also Deacon, 1997: chapter 5.)
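
Deacon's parallelism point lends itself to a toy simulation. In the sketch below (illustrative numbers only, with an assumed brain–body scaling exponent of 0.75), fitness depends solely on body size, yet mean brain size climbs generation after generation as a passive correlate:

```python
import random

random.seed(1)

ALLOMETRIC_EXPONENT = 0.75  # assumed illustrative brain ~ body^0.75 scaling

def brain_size(body):
    """Brain size as a passive allometric function of body size (plus noise)."""
    return body ** ALLOMETRIC_EXPONENT * random.gauss(1.0, 0.05)

# A population of body sizes; fitness depends ONLY on body size.
pop = [random.gauss(50.0, 5.0) for _ in range(500)]

for generation in range(100):
    # Truncation selection on bodies: the larger half reproduces (2 offspring each).
    survivors = sorted(pop)[len(pop) // 2:]
    pop = [b * random.gauss(1.0, 0.02) for b in survivors for _ in range(2)]

mean_body = sum(pop) / len(pop)
mean_brain = sum(brain_size(b) for b in pop) / len(pop)
print(f"mean body: {mean_body:.1f}   mean brain: {mean_brain:.1f}")
# Mean brain size has risen although nothing in the model ever "saw" brains.
```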

So is there really progress in brain evolution, which would, in effect, lend credence to the idea that evolution is progressive? No: the so-called size increases throughout human evolutionary history are an artifact that disappears once brain size and functional specialization are taken into account (functional specialization being the claim that different areas of the brain are specialized to carry out different functions; see Mahon and Cantlon, 2014). Our brains only seem to have progressed; when we get down to the functional details, we can see that it's just an artifact.

Skoyles and Sagan (2002: 240) note that erectus, for example, could have survived with much smaller brains and that the brain of erectus did not arise for the need for survival:

So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy's 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.

In any case—irrespective of the problems that Deacon shows for arguments for increasing brain size—how would we be able to use the theory of natural selection to show what was selected-for, brain size or another correlated trait? The progressionist may say that it doesn’t matter which is selected-for, the brain size is still increasing even if the correlated trait—the free-rider—is being selected-for.

But, too bad for the progressionist: if a correlated trait is what is being selected-for, and not brain size directly, then the progressionist cannot logically state that brain size—and along with it intelligence (as the implication always is)—was directly selected-for. Deacon throws a wrench into such theories of evolutionary progress in regard to human brain size. And, looking at erectus, it's not clear that he really "needed" such a big brain for survival—it seems he could have gotten away with a much smaller one. There is no reason, as George Williams notes, to argue that "high intelligence" was selected-for in our evolutionary history.
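
The underdetermination here can be made concrete. In the following toy model (arbitrary numbers; t2 is defined as a fixed function of t1, making the two traits perfectly coextensive), running selection "on" t1 and "on" t2 produces literally identical trajectories—so no observation of the outcome could tell you which trait was selected-for:

```python
import random

def evolve(select_on, seed=42, n=200, gens=50):
    """Evolve a population in which traits t1 and t2 are perfectly coextensive
    (t2 is a fixed function of t1), selecting on the named trait."""
    rng = random.Random(seed)
    pop = [rng.gauss(10.0, 1.0) for _ in range(n)]  # t1 values; t2 = 2*t1 + 3
    history = []
    for _ in range(gens):
        traits = [(t1, 2 * t1 + 3) for t1 in pop]
        key = (lambda tr: tr[0]) if select_on == "t1" else (lambda tr: tr[1])
        survivors = sorted(traits, key=key)[n // 2:]   # top half reproduces
        pop = [t1 * rng.gauss(1.0, 0.01) for t1, _ in survivors for _ in range(2)]
        history.append(sum(pop) / n)                   # record mean trait value
    return history

# Selecting "for" t1 and selecting "for" t2 yield identical data:
print(evolve("t1") == evolve("t2"))  # True -- the outcome cannot tell them apart
```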

And so, Gould's Full House argument still stands—there is no progress in evolution; bacteria occupy life's mode; humans are insignificant next to the number of bacteria on the planet, "big brains" or not.


Rampant Adaptationism

1500 words

Adaptationism is the main school of evolutionary change, through "natural selection" (NS). That is the only way for adaptations to appear, says the adaptationist: traits that were conducive to reproductive success in past environments were selected-for their contribution to fitness and therefore became fixed in the organism in question. That's adaptationism in a nutshell. It's also vacuous and tells us nothing interesting. In any case, the school of thought called adaptationism has been the subject of much criticism, most importantly Gould and Lewontin (1979), Fodor (2008), and Fodor and Piattelli-Palmarini (2010). So, I would say that adaptationism becomes "rampant" when clearly cultural changes are treated as having an evolutionary history and as persisting today because they are adaptations.

Take Bret Weinstein’s recent conversation with Richard Dawkins:

Weinstein: “Understood through the perspective of German genes, vile as these behaviors were, they were completely comprehensible at the level of fitness. It was abhorrent and unacceptable—but understandable—that Germany should have viewed its Jewish population as a source of resources if you viewed Jews as non-people. And the belief structures that cause people to step onto the battlefields and fight were clearly comprehensible as adaptations of the lineages in question.”

Dawkins: “I think nationalism may be an even greater evil than religion. And I’m not sure that it’s actually helpful to speak of it in Darwinian terms.”

I find it funny that Weinstein is more of a Dawkins-ist than Dawkins himself is (in regard to his "selfish gene theory"; see Noble, 2011). In any case, what a ridiculous claim. "Guys, the Nazis were bad because of their genes and their genes made them view Jews as non-people and resources. Their behaviors were completely understandable at the level of fitness. But, Nazis bad!"

I like how Dawkins quickly shot the bullshit down. This is just-so storytelling on steroids. I wonder what "belief structures that cause people to step onto battlefields" are "adaptations of the lineages in question"? Are German belief-structure adaptations different from those of any other group? Can one prove that there are "belief structures" that are "adaptations of the lineages in question"? Or is Weinstein just telling just-so stories—stories with little evidence that merely "fit" and "make sense" with the data we have (despicable Nazi behavior toward Jews after WWI and before and during WWII)?

There is a larger problem with adaptationism, though: adaptationists confuse adaptiveness with adaptation (a trait can be adaptive without being an adaptation); they overlook nonadaptationist explanations; and adaptationist hypotheses are hard to falsify, since a new story can be erected to explain the feature in question if one story gets disproved. That's the dodginess of adaptationism.

An adaptationist may look at an organism, look at its traits, then construct a story as to why it has the traits it does. They will attempt to reconstruct its evolutionary history by looking at the environment it currently occupies and what its traits are useful for now. But there is a danger here. We can create many stories for just one so-called adaptation. How do we distinguish between the stories that explain the fixation of the trait and those that do not? We can't: there is no way for us to know which of the causal stories explains the fixation of the trait.

Gould and Lewontin (1979) fault:

the adaptationist programme for its failure to distinguish current utility from reasons for origin (male tyrannosaurs may have used their diminutive front legs to titillate female partners, but this will not explain why they got so small); for its unwillingness to consider alternatives to adaptive stories; for its reliance upon plausibility alone as a criterion for accepting speculative tales; and for its failure to consider adequately such competing themes as random fixation of alleles, production of nonadaptive structures by developmental correlation with selected features (allometry, pleiotropy, material compensation, mechanically forced correlation), the separability of adaptation and selection, multiple adaptive peaks, and current utility as an epiphenomenon of nonadaptive structures.

[…]

One must not confuse the fact that a structure is used in some way (consider again the spandrels, ceiling spaces, and Aztec bodies) with the primary evolutionary reason for its existence and conformation.

Of course, though, adaptationists (e.g., evolutionary psychologists) do confuse a structure's current use with the evolutionary reason for its existence. This is fallacious reasoning. That a trait is useful in a current environment is in no way evidence that it is an adaptation, nor is it evidence that that's why the trait evolved.

But there is a problem with looking to the ecology of the organism in question and attempting to construct historical narratives about the evolution of the so-called adaptation. As Fodor and Piattelli-Palmarini (2010) note, "if evolutionary problems are individuated post hoc, it's hardly surprising that phenotypes are so good at solving them." Of course, if an organism fails to secure a niche, we simply say that the niche was not for that organism.

That organisms are so "fit" to their environment, like a puzzle piece to its surrounding pieces, is supposed to prove that traits are selected-for their contribution to fitness in a given ecology—and this is what the theory of natural selection attempts to explain. Organisms fit their ecologies because it's their ecologies that "design" their traits. So it is no wonder that organisms and their environments have such a tight relationship.

Take it from Fodor and Piattelli-Palmarini (2010: 137):

You don't, after all, need an adaptationist account of evolution in order to explain the fact that phenotypes are so often appropriate to ecologies, since, first impressions to the contrary notwithstanding, there is no such fact. It is just a tautology that (if it isn't dead) a creature's phenotype is appropriate for its survival in the ecology that it inhabits.

So, since the terms "ecology" and "phenotype" are interdefined, is it any wonder that an organism's phenotype has such a "great fit" with its ecology? I don't think it is. Fodor and Piattelli-Palmarini (2010) note how:

it is interesting and false that creatures are well adapted to their environments; on the other hand it's true but not interesting that creatures are well adapted to their ecologies. What, then, is the interesting truth about the fitness of phenotypes that we require adaptationism in order to explain? We've tried and tried, but we haven't been able to think of one.

So the argument here could be:

P1) Niches are individuated post hoc by reference to the phenotypes that live in said niche.
P2) If the organisms weren’t there, the niche would not be there either.
C) Therefore there is no fitness of phenotypes to lifestyles that explain said adaptation.

Fodor and Piattelli-Palmarini put it bluntly regarding how the organism "fits" its ecology: "although it's very often cited in defence of Darwinism, the 'exquisite fit' of phenotypes to their niches is either true but tautological or irrelevant to questions about how phenotypes evolve. In either case, it provides no evidence for adaptationism."

The million-dollar question is this, though: what would be evidence that a trait is an adaptation? Knowing what we now know about the so-called fit to the ecology, how can we say that a trait is an adaptation for problem X when niches are individuated post hoc? That right there is the folly of adaptationism, along with the fact that it is unfalsifiable and leads to just-so storytelling (Smith, 2016).

Such stories are "plausible", but that is only because they are selected to be so. When such adaptationism becomes entrenched in thought, many traits are looked at as adaptations and then stories are constructed as to how and why each trait became fixed in the organism. But, just like EP, which uses the theory of natural selection as its basis, so too does adaptationism fail. Never mind the problem that fitting species to ecologies renders evolutionary problems post hoc; never mind the problem that there are no criteria for identifying adaptations; do mind the fact that there is no possible way for natural selection to do what it is claimed to do: distinguish between coextensive traits.

In sum, adaptationism is a failed paradigm and we need to dispense with it. The logical problems with it are more than enough to disregard it. Sure, the fitness of a phenotype—say, the claws of a mole—does make sense in the ecology it is in. But we only claim that the claws of a mole are adaptations after the fact, obviously. One may say, "It's obvious that the claws of a mole are adaptations, look at how it lives!" But this betrays the point that Gould and Lewontin (1979) made: do not confuse a structure's current use with the evolutionary reason for its existence, which, unfortunately, many people do (most glaringly, evolutionary psychologists). Weinstein's ridiculous claims about Nazi actions during WWII are a great example of how rampant adaptationism has become: we can explain any and all traits as adaptations; we just need to be creative with the stories we tell. But just because we can create a story that "makes sense" and explains the observation does not mean that the story is a likely explanation for the trait's existence.

Just-so Stories: MCPH1 and ASPM

1350 words

"Microcephalin, a gene regulating brain size, continues to evolve adaptively in humans" (Evans et al, 2005) and "Adaptive evolution of ASPM, a major determinant of cerebral cortical size in humans" (Evans et al, 2004) are two papers from the same research team which purport to show that both MCPH1 and ASPM are "adaptive" and therefore were "selected-for" (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010 for discussion). The claim is that "Darwinian selection" "operated on" the ASPM gene (Evans et al, 2004), and that identifying selection on it, along with its functional effect, is evidence that it was "selected-for." Though the combination of a functional effect along with signs of (supposedly) positive selection does not license the claim that the gene was "selected-for."

One of the investigators who participated in these studies was one Bruce Lahn, who stated in an interview that MCPH1 “is clearly favored by natural selection.” Evans et al (2005) show specifically that the variant supposedly under selection (MCPH1) showed lower frequencies in Africans and the highest in Europeans.

But, unfortunately for IQ-ists, neither of these two alleles is associated with IQ. Mekel-Bobrov et al (2007: 601) write that their "overall findings suggest that intelligence, as measured by these IQ tests, was not detectably associated with the D-allele of either ASPM or Microcephalin." Timpson et al (2007: 1036A) found "no meaningful associations with brain size and various cognitive measures, which indicates that contrary to previous speculations, ASPM and MCPH1 have not been selected for brain-related effects" in 9,000 genotyped children. Rushton, Vernon, and Bons (2007) write that "No evidence was found of a relation between the two candidate genes ASPM and MCPH1 and individual differences in head circumference, GMA or social intelligence." Bates et al's (2008) analysis shows no relationship between IQ and MCPH1-derived genes.
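
For readers unfamiliar with what these null results mean operationally, here is a generic sketch of the kind of additive-model association test such studies run—an illustration with simulated data, not the actual pipeline of any cited paper, with the sample size only loosely modeled on Timpson et al:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 9000                                 # sample size, roughly Timpson et al's
genotype = rng.binomial(2, 0.3, size=n)  # allele dosage: 0, 1, or 2 copies of D
iq = rng.normal(100, 15, size=n)         # phenotype simulated with NO allele effect

# Additive-model association test: regress IQ on allele dosage.
res = stats.linregress(genotype, iq)
print(f"slope = {res.slope:.3f} IQ points per allele, p = {res.pvalue:.3f}")
# With no true effect the slope hovers near zero -- the shape of the null
# results reported for ASPM and MCPH1.
```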

But, to bring up Fodor’s critique, if MCPH1 is coextensive with another gene, and both enhance fitness, then how can there be direct selection on the gene in question? There is no way for selection to distinguish between the two linked genes. Take Mekel-Bobrov et al (2005: 1722) who write:

The recent selective history of ASPM in humans thus continues the trend of positive selection that has operated at this locus for millions of years in the hominid lineage. Although the age of haplogroup D and its geographic distribution across Eurasia roughly coincide with two important events in the cultural evolution of Eurasia—namely, the emergence and spread of domestication from the Middle East ~10,000 years ago and the rapid increase in population associated with the development of cities and written language 5000 to 6000 years ago around the Middle East—the significance of this correlation is not clear.

Surely both of these genetic variants had a hand in the dawn of these civilizations and the behaviors of our ancestors; they are correlated, right? And this is not merely journalists over-reading the research they report on—these types of wild speculation appear in the papers referenced above. Lahn and his colleagues are engaging in very wild speculation—if, that is, these variants are under positive selection at all.

So it seems that this research and the conclusions drawn from it are ripe for a just-so story. We need to do a just-so story check. Now let’s consult Smith’s (2016: 277-278) seven just-so story triggers:

1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.

For example, take (1): a theory-driven explanation leads to a just-so story. As Shapiro (2002: 603) notes, "The theory-driven scholar commits to a sufficient account of a phenomenon, developing a "just so" story that might seem convincing to partisans of her theoretical priors. Others will see no more reason to believe it than a host of other "just so" stories that might have been developed, vindicating different theoretical priors." That these two genes were "selected-for" is, for Evans et al, a theory-driven explanation, and it therefore falls prey to the just-so story criticism.

Rasmus Nielsen (2009) has a paper on thirty years of adaptationism after Gould and Lewontin's (1979) Spandrels paper. In it, he critiques purported examples of genes being selected-for: the lactase gene, and MCPH1 and ASPM. Nielsen (2009) writes of MCPH1 and ASPM:

Deleterious mutations in ASPM and microcephalin may lead to reduced brain size, presumably because these genes are cell‐cycle regulators and very fast cell division is required for normal development of the fetal brain. Mutations in many different genes might cause microcephaly, but changes in these genes may not have been the underlying molecular cause for the increased brain size occurring during the evolution of man.

In any case, Currat et al (2006: 176a) show that "the high haplotype frequency, high levels of homozygosity, and spatial patterns observed by Mekel-Bobrov et al. (1) and Evans et al. (2) can be generated by demographic models of human history involving a founder effect out-of-Africa and a subsequent demographic or spatial population expansion, a very plausible scenario (5). Thus, there is insufficient evidence for ongoing selection acting on ASPM and microcephalin within humans." McGowen et al (2011) show that there is "no evidence to support an association between MCPH1 evolution and the evolution of brain size in highly encephalized mammalian species. Our finding of significant positive selection in MCPH1 may be linked to other functions of the gene."
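
Currat et al's demographic alternative is easy to illustrate: under a founder event followed by expansion, a perfectly neutral allele can drift to high frequency. The sketch below uses invented parameters (founder size, growth rate) and is not a calibrated model of human history:

```python
import numpy as np

def neutral_frequency(seed, generations=400):
    """Track a neutral allele through a founder bottleneck and expansion.
    No fitness differences anywhere -- pure drift plus demography."""
    rng = np.random.default_rng(seed)
    freq = 0.10                 # allele frequency in the source population
    size = 50                   # founder event: a tiny out-of-Africa sample
    freq = rng.binomial(2 * size, freq) / (2 * size)
    for _ in range(generations):
        # Subsequent exponential expansion with Wright-Fisher sampling.
        size = min(int(size * 1.05), 100_000)
        freq = rng.binomial(2 * size, freq) / (2 * size)
    return freq

# Across replicate histories a 10%-frequency neutral allele often drifts to
# high frequency (or is lost) -- so a common derived haplotype, by itself,
# is weak evidence of positive selection.
print([round(neutral_frequency(s), 2) for s in range(8)])
```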

Lastly, Richardson (2011: 429) writes that:

The force of acceptance of a theoretical framework for approaching the genetics of human intellectual differences may be assessed by the ease with which it is accepted despite the lack of original empirical studies – and ample contradictory evidence. In fact, there was no evidence of an association between the alleles and either IQ or brain size. Based on what was known about the actual role of the microcephaly gene loci in brain development in 2005, it was not appropriate to describe ASPM and microcephalin as genes controlling human brain size, or even as ‘brain genes’. The genes are not localized in expression or function to the brain, nor specifically to brain development, but are ubiquitous throughout the body. Their principal known function is in mitosis (cell division). The hypothesized reason that problems with the ASPM and microcephalin genes may lead to small brains is that early brain growth is contingent on rapid cell division of the neural stem cells; if this process is disrupted or asymmetric in some way, the brain will never grow to full size (Kouprina et al, 2004, p. 659; Ponting and Jackson, 2005, p. 246)

Now that we have a better picture of both of these alleles and what they are proposed to do, let's turn to Lahn's comments on his studies. Lahn, of course, pointed to "lactase" and "skin color" genes in defense of his assertion that genes like ASPM and MCPH1 are linked to "intelligence" and thusly were selected-for just that purpose. However, as Nielsen (2009) shows, that a gene has a functional effect and shows signs of selection does not license the claim that the gene in question was selected-for. Therefore, Lahn and colleagues engaged in fallacious reasoning; they did not show that such genes were "selected-for", and even studies done by some prominent hereditarians did not show that such genes were associated with IQ.

Like what we now know about the FOXP2 gene and how there is no evidence for recent positive or balancing selection (Atkinson et al, 2018), we can now say the same for other such evolutionary just-so stories that try to give an adaptive tinge to a trait. We cannot confuse selection and function as evidence for adaptation. Such just-so stories, like the one described above along with others on this blog, can be told about any trait or gene to explain why it was selected and stabilized in the organism in question. But historical narratives may be unfalsifiable. As Sterelny and Griffiths write in their book Sex and Death:

Whenever a particular adaptive story is discredited, the adaptationist makes up a new story, or just promises to look for one. The possibility that the trait is not an adaptation is never considered.

The Modern Synthesis vs the Extended Evolutionary Synthesis

2050 words

The Modern Synthesis (MS) has entrenched evolutionary thought since its inception in the 1930s and '40s. The MS is the integration of Darwinian natural selection and Mendelian genetics. Key assumptions include "(i) evolutionarily significant phenotypic variation arises from genetic mutations that occur at a low rate independently of the strength and direction of natural selection; (ii) most favourable mutations have small phenotypic effects, which results in gradual phenotypic change; (iii) inheritance is genetic; (iv) natural selection is the sole explanation for adaptation; and (v) macro-evolution is the result of accumulation of differences that arise through micro-evolutionary processes" (Laland et al, 2015).

Laland et al (2015) even have a helpful table on core assumptions of both the MS and Extended Evolutionary Synthesis (EES). The MS assumptions are on the left while the EES assumptions are on the right.

[Table: core assumptions of the MS (left) versus the EES (right), from Laland et al (2015)]

Darwinian cheerleaders, such as Jerry Coyne and Richard Dawkins, would claim that neo-Darwinism can—and already does—account for the assumptions of the EES. However, it is clear that that claim is false. At its core, the MS is a gene-centered perspective whereas the EES is an organism-centered perspective.

To the followers of the MS, evolution occurs through random mutations and changes in allele frequencies, which get selected by natural selection when they lead to an increase in fitness in the organism; the trait that the genes 'cause' then carries on to the next generation due to its contribution to fitness. Drift, mutation, and gene flow also account for changes in allele frequencies, but to the Darwinian, selection is the strongest of these modes of evolution. The debate about the MS and the EES comes down to gene-selectionism vs developmental systems theory.
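
To see what this gene-centric picture amounts to in practice, here is the textbook Wright-Fisher cartoon with selection—a minimal haploid sketch with arbitrary parameter values, not a model of any particular trait:

```python
import numpy as np

def allele_trajectory(s=0.02, p0=0.01, n=1000, gens=500, seed=0):
    """Wright-Fisher with selection: the gene-centric MS picture in miniature."""
    rng = np.random.default_rng(seed)
    p = p0
    for _ in range(gens):
        # Deterministic selection step (haploid, advantage s)...
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...then binomial drift from finite population size.
        p = rng.binomial(n, p) / n
        if p in (0.0, 1.0):
            break
    return p

# The favoured allele fixes in some runs and is lost by drift in others.
print([allele_trajectory(seed=s) for s in range(10)])
```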

On the other hand, the EES is an organism-centered perspective. Adherents of the EES state that the organism is inseparable from its environment. Järvilehto (1998) describes this well:

The theory of the organism-environment system (Järvilehto, 1994, 1995) starts with the proposition that in any functional sense organism and environment are inseparable and form only one unitary system. The organism cannot exist without the environment and the environment has descriptive properties only if it is connected to the organism.

At its core, the EES makes evolution about the organism—its developmental system—and treats genes not as active causes of traits and behaviors but as passive causes, used by and for the system as needed (Noble, 2011; Richardson, 2017).

One can see that the core assumptions of the MS are very much like what Dawkins describes in his book The Selfish Gene (Dawkins, 1976). In the book, Dawkins claimed that we are what amounts to “gene machines”—that is, just vehicles for the riders, the genes. So, for example, since we are just gene machines, and if genes are literally selfish “things”, then all of our actions and behaviors can be reduced to the fact that our genes “want” to survive. But the “selfish gene” theory “is not even capable of direct empirical falsification” (Noble, 2011) because Richard Dawkins emphatically stated in The Extended Phenotype (Dawkins, 1982: 1) that “I doubt that there is any experiment that could prove my claim” (quoted in Noble, 2011).

Noble (2011) goes on to discuss Dawkins' view of genes:

Now they swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. (1976, 20)

Noble then switches the analogy: he likens genes not to "selfish" agents but to "prisoners", stuck in the body with no way of escape. Since there is no experiment to distinguish between the two views (which Dawkins admitted), Noble concludes that, instead of being "selfish", genes are better seen as the physiological sciences see them: "cooperative", since they need to "cooperate" with the environment, other genes, gene networks, etc., which comprise the whole organism.

In his 2018 book Agents and Goals in Evolution, Samir Okasha distinguishes between type I and type II agential thinking. "In type 1 [agential thinking], the agent with the goal is an evolved entity, typically an individual organism; in type 2, the agent is 'mother nature', a personification of natural selection" (Okasha, 2018: 23). An example of type I agential thinking is Dawkins' selfish genes, while type II is the personification imputed onto natural selection—a type of thinking that Okasha states "Darwin was himself first to employ" (Okasha, 2018: 36).

Okasha states that each gene's ultimate goal is to outcompete other genes—for the gene in question to increase its frequency in the population. Genes can also have intermediate goals, such as maximizing fitness. Okasha gives three criteria for what makes something "an agent": (1) goal-directedness; (2) behavioral flexibility; and (3) adaptedness. So the "selfish" element "constitutes the strongest argument for agential thinking" about genes (Okasha, 2018: 73). However, as Denis Noble has tirelessly pointed out, genes (DNA sequences) are inert molecules (and are one part of the developing system) and so do not show behavioral flexibility or goal-directedness. Genes can (along with other parts of the system working in concert with them) exert adaptive effects on the phenotype, though when genes (and traits) are coextensive, selection cannot distinguish between the fitness-enhancing trait and the free-riding trait, so it only makes logical sense to claim that organisms are selected, not any individual traits (Fodor and Piattelli-Palmarini, 2010a, 2010b).

It is because of this that the neo-Darwinian gene-centric paradigm has failed, and it is the reason why we need a new evolutionary synthesis. Some only wish to tweak the MS a bit in order to incorporate what it currently leaves out, but others want to overhaul the entire thing and extend it.

Here is the main reason why the MS fails: there is absolutely no reason to privilege any level of the system above any other. Causation is multi-level and constantly interacting. There is no a priori justification for privileging any developmental variable over any other (Noble, 2012, 2017). Both downward and upward causation exist in biological systems (which means that molecules depend on organismal context). The organism is also able to control stochasticity, which is "used to … generate novelty" (Noble and Noble, 2018). Lastly, there is the creation of novelty at new levels of selection, as when the organism is an active participant in the construction of its environment.

Now, what does the EES bring that is different from the MS? A whole bunch. Most importantly, it makes a slew of novel predictions. Laland et al (2016) write:

For example, the EES predicts that stress-induced phenotypic variation can initiate adaptive divergence in morphology, physiology and behaviour because of the ability of developmental mechanisms to accommodate new environments (consistent with predictions 1–3 and 7 in table 3). This is supported by research on colonizing populations of house finches [68], water fleas [132] and sticklebacks [55,133] and, from a more macro-evolutionary perspective, by studies of the vertebrate limb [57]. The predictions in table 3 are a small subset of those that characterize the EES, but suffice to illustrate its novelty, can be tested empirically, and should encourage deriving and testing further predictions.

[Table 3: a subset of novel EES predictions, from Laland et al (2016)]

There are other ways to verify EES predictions, and they're simple and can be done in the lab. In his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, philosopher of biology Jan Baedke notes that studies of epigenetic processes induced in the lab and those observed in nature share the same methodological framework. So we can use lab-induced epigenetic processes to ask evolutionary questions and get evolutionary answers in an epigenetic framework. There are two problems, though: one, we don't know whether experimental and natural epigenetic inducements will match up; and two, we don't know whether epigenetic explanations that focus on proximate causes and not ultimate causes can address evolutionary explananda. Baedke (2018: 89) writes:

The first has been addressed by showing that studies of epigenetic processes that are experimentally induced in the lab (in molecular epigenetics) and those observed in natural populations in the field (in ecological or evolutionary epigenetics) are not that different after all. They share a similar methodological framework, one that allows them to pose heuristically fruitful research questions and to build reciprocal transparent models. The second issue becomes far less fundamental if one understands the predominant reading of Mayr’s classical proximate-ultimate distinction as offering a simplifying picture of what (and how) developmental explanations actually explain. Once the nature of developmental dependencies has been revealed, the appropriateness of developmentally oriented approaches, such as epigenetics, in evolutionary biology is secured.

Further arguments for epigenetics from an evolutionary approach can be found in Richardson’s (2017) Genes, Brains, and Human Potential (chapter 4 and 5) and Jablonka and Lamb’s (2005) Evolution in Four Dimensions. More than genes alone are passed on and inherited, and this throws a wrench into the MS.

Some may fault DST for not offering anything comparable to Darwinism, as Dupre (2003: 37) notes:

Critics of DST complain that it fails to offer any positive programme that has achievements comparable to more orthodox neo-Darwinism, and so far this complaint is probably justified.

But this is irrelevant. For if we look at DST as just a part of the whole EES programme, then it is the EES that needs to—and does—"offer a positive programme that has achievements comparable to more orthodox neo-Darwinism" (Dupre, 2003: 37). And that is exactly what the EES does: it makes novel predictions; it explains what needs to be explained better than the MS does; and the MS has been shown to be incoherent (that is, there cannot be selection on only one level; there can only be selection on the organism). That the main tool of the MS (natural selection) has been shown by Fodor to be vacuous and non-mechanistic is yet another strike against it.

Since DST is a main part of the EES, and DST is "a wholeheartedly epigenetic approach to development, inheritance and evolution" (Griffiths, 2015), and the EES incorporates epigenetic theories, the EES will live or die on whether its evolutionary epigenetic theories are confirmed. And with the recent slew of books and articles attesting to a huge epigenetic component in evolution (e.g., Baedke, 2018; Bonduriansky and Day, 2018; Meloni, 2019), it is most definitely worth seeing what we can find in regard to evolutionary epigenetics studies, since epigenetic changes induced in the lab and those observed in natural populations are not that different. This can then confirm or deconfirm major hypotheses of the EES—of which there are many. It is time for Lamarck to make his return.

It is clear that the MS is lacking, as many authors have pointed out. To understand evolutionary history and why organisms have the traits they do, we need much more than the natural selection-dominated neo-Darwinian Modern Synthesis. We need a new synthesis (which has been in formulation for the past 15-20 years), and only through this new synthesis can we understand the hows and whys. The MS was good when we didn't know any better, but the reductionism it assumes is untenable; there cannot be direct selection on any single level below the organism (e.g., the gene), so it is a nonsensical programme. Genes are not directly selected, nor are traits that enhance fitness. Whole organisms and their developmental systems are selected and propagate into future generations.

The EES (and DST along with it) holds to the causal parity thesis—"that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables." This causal parity between all the tools of development is telling: what is selected is not just one level of the system, as genetic reductionists (neo-Darwinists) would like to believe; selection occurs on the whole organism and what it interacts with (the environment)—environments are inherited too. Once we purge the falsities forced upon us by the MS in regard to organisms and their relationship with the environment, and the MS's assumptions about evolution as a whole, we can truly understand how and why organisms evolve the phenotypes they do; we cannot do so with genetic reductionist thinking and sloppy logic. So who wins? Not the MS, since it gets causation in biology wrong. This leaves the EES as the superior theory, predictor, and explainer.

Book Review: “Lamarck’s Revenge”

3500 words

I recently bought Lamarck's Revenge by paleobiologist Peter Ward (2018) because I went on a trip and needed something to read on the flight. I just finished the book the other day, and I thought that I would give a review and also discuss Coyne's review of the book, since I know he is so uptight about epigenetic theories, like those of Denis Noble and Jablonka and Lamb. In Lamarck's Revenge, Ward (2018) purports to show that Lamarck was right all along and that the advent of the burgeoning field of epigenetics is "Lamarck's revenge" against those who—in the current day—make fun of his theories in intro biology classes. (When I took Bio 101, the professor made it a point to bring up Lamarck and giraffe necks as a "look at this wrong theory" moment, nevermind the fact that Darwin was wrong too.) I will go chapter-by-chapter, give a brief synopsis of each, and then discuss Coyne's review.

In the introduction, Ward discusses some of the problems with Darwinian thought and current biological understanding. The neo-Darwinian Modern Synthesis states that what occurs in the lifetime of the organism cannot be passed down to further generations—that any 'marks' on the genome are erased. Recent research, however, has shown that this is not the case. Numerous studies on plants and "simpler" organisms refute the notion, though for more "complex" organisms it has yet to be proved. That this discussion is even occurring, though, is proof that we are heading in the right direction in regard to a new synthesis. In fact, Jablonka and Lamb (2005) showed in their book Evolution in Four Dimensions that epigenetic mechanisms can and do produce rapid speciation—too quick for "normal" Darwinian evolution.

Ward (2018: 3-4) writes:

There are good times and bad times on earth, and it is proposed here that dichotomy has fueled a coupling of times when evolution has been mainly through Darwinian evolution and others when Lamarckian evolution has been dominant. Darwinian in good times, Lamarckian in bad, when bad can be defined as those times when our environments turn topsy-turvy, and do so quickly. When an asteroid hits the planet. When giant volcanic episodes create stagnant oceans. When a parent becomes a sexual predator. When our industrial output warms the world. When there are six billion humans and counting.

These examples are good—save the one about a parent becoming a sexual predator (though if we accept the thesis that what we do and what happens to us can leave marks on our DNA that don't change the sequence but are passed on, then it is OK)—and they all point to one thing: the environment becoming ultra-chaotic. When such changes occur in the environment, an organism needs a physiology that can change on demand in order to survive (see Richardson, 2017).

Ward (2018: 8) then describes Lamarck’s three-step process:

First, an animal experienced a radical change of the environment around it. Second, the initial response to the environmental change was some new kind of behavior by that animal (or whole species). Third, the behavioral change was followed by morphological change in subsequent generations.

Ward then discusses those who had theories of evolution before Darwin—Darwin's grandfather Erasmus, for instance. In any case, we went from a world in which a God created all to a world where everything we see was created by natural processes.

Then in Chapter 2, Ward discusses Lamarck and Darwin and each of their theories in turn. (Note that Darwin held Lamarckian views too.) Ward discusses the intellectual duel between Lamarck and Georges Cuvier, the father of the field of comparative anatomy, who studied mass extinctions. At Lamarck's funeral, Cuvier spoke ill of Lamarck and buried his theories. (See Cuvier's (1836) Elegy of Lamarck.) These types of arguments between academics have been going on for hundreds of years—and they will not stop any time soon.

In Chapter 3 Ward traces Darwin's ideas all the way to the Modern Synthesis, discussing how Darwin formulated his theory of natural selection, the purported "mechanism of evolution." Ward discusses how Darwin at first rejected Lamarck's ideas but then integrated them into later editions of On the Origin. We can think of this scenario: imagine any environment and the organisms in it. The environment rapidly shifts until it is unrecognizable. The organisms in that environment then need to either change their behavior (and reproduce) or die. Now, if there were no way for organisms to change, say, their physiology (since physiology is dependent on what is occurring in the outside environment), then the species would die and there would be no evolution. However, the advent of evolved physiologies changed that. Morphological and physiological plasticity can and does help organisms survive in new environments—environments that are "new" to the parental organism—and this is a form of Lamarckism ("heritable epigenetics", as Ward calls it).

Chapter 4 discusses epigenetics and a newer synthesis. In the beginning of the chapter, Ward discusses a study he was a part of (Vandepas et al, 2016). (Read Ward's Nautilus article here.)

They studied two (so-called) different species of nautilus: Nautilus pompilius, widespread across the Pacific and Indian Oceans, and Nautilus stenomphalus, which is only found at the Great Barrier Reef. Pompilius has a hole in the middle of its shell, whereas stenomphalus has a plug in the middle. The two (so-called) species also differ in anatomy—pompilius has a hood covered with bumps of flesh whereas stenomphalus' hood is filled with projections of moss-like twig structures. So over a thirty-day period, they captured thirty nautiluses, snipped a piece of their tentacles, and sequenced the DNA found in them. They found that the DNA of these two morphologically different animals was the same. Thus, although the two are said to be different species based on their morphology, genetically they are the same species, which leads Ward (2018: 52) to claim "that perhaps there are fewer, not more, species on Earth than science has defined." Ward (2018: 53) cites a recent example—the fact that the Columbian and North American woolly mammoths "were genetically the same but the two had phenotypes determined by environment" (see Enk et al, 2011).

Now take Ward’s (2018: 58) definition of “heritable epigenetics”:

In heritable epigenetics, we pass on the same genome, but one marked (mark is the formal term for the place that a methyl molecule attaches to one nucleotide, a rung in the ladder of DNA) in such a way that the new organism soon has its own DNA swarmed by these new (and usually unwelcome) additions riding on the chromosomes. The genotype is not changed, but the genes carrying the new, sucker-like methyl molecules change the workings of the organism to something new, such as the production (or lack thereof) of chemicals necessary for our good health, or for how some part of the body is produced.
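
Here is a cartoon of what Ward describes, in code; the gene names and their effects are invented purely for illustration. The point is structural: the DNA sequence never changes, but a heritable methylation mask silences genes and thereby changes the phenotype:

```python
# A cartoon of "heritable epigenetics": the genotype is fixed, but a heritable
# methylation mask silences genes and so changes what the organism expresses.
# Gene names and effects are invented purely for illustration.

GENOME = ("growth_factor", "pigment", "stress_response")  # fixed DNA sequence

def phenotype(methylation):
    """Expressed genes = genome minus methylated (silenced) genes."""
    return [gene for gene in GENOME if gene not in methylation]

parent_marks = {"stress_response"}   # mark acquired during the parent's lifetime
child_marks = set(parent_marks)      # marks transmitted along with the chromosomes

print(phenotype(set()))         # unmarked lineage: all three genes expressed
print(phenotype(child_marks))   # marked lineage: same DNA, different phenotype
```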

Chapter 5 discusses different environments in the context of evolutionary history. Environmental catastrophes that lead to the decimation of most life on the planet are the subject—something Gould wrote about throughout his career (his concept of contingency in the evolutionary process). Now, going back to Lamarck's dictum (first an environmental change, second a change in behavior, and third a change in phenotype), we can see that these kinds of processes were indeed imperative in the evolution of life on earth. Take the asteroid impact (the K-Pg, or Cretaceous-Paleogene, extinction) that killed off the dinosaurs and threw tons of soot into the air, blocking out the sun and making it effectively night (Schulte et al, 2010). All organisms that survived needed to eat. If an organism only ate in the daytime, it would then need to eat at night or die. That right there is a radical environmental change (step 1) followed by a change in behavior (step 2), which would eventually lead to step 3.

In Chapter 6, Ward discusses epigenetics and the origins of life. The main subject of the chapter is lateral gene transfer—the transmission of DNA between genomes. Hundreds or thousands of new genes can be inserted into an organism and effectively change its morphology—a Lamarckian mechanism. Ward posits that there were many kinds of "genetic codes" and "metabolisms" throughout earth's history, even organisms that were "alive" but not capable of reproducing, and so were "one-offs." Ward even describes Margulis' (1967) theory of endosymbiosis as "a Lamarckian event", a characterization Margulis herself accepted. Thus, evolution through lateral gene transfer is possible, and it is another Lamarckian mechanism.

Chapter 7 discusses epigenetics and the Cambrian explosion. Ward cites a Creationist who claims that there has not been enough time since the explosion some 500 million years ago to explain the diversity of body plans that arose. Stephen Jay Gould wrote a whole book on this—Wonderful Life. It is true that Darwinian theory cannot explain the diversity of body plans, nor even the diversity of species and their traits (Fodor and Piattelli-Palmarini, 2010), but this does not mean that Creationism is true. If we are discussing the diversification of organismal life after mass extinctions, then Darwinian evolution cannot possibly have played the decisive role in the survival of species—organisms with adaptive physiologies would have had a better chance of surviving in these new, chaotic environments.

It is posited here that four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion as we currently know it: the first, now familiar, methylation; second, small RNA silencing; third, changes in the histones, the scaffolding that dictates the overall shape of a DNA molecule; and, finally, lateral gene transfer, which has recently been shown to work in animals, not just microbes. (Ward, 2018: 113)

Ginsburg and Jablonka (2010) state that "[associative] learning-based diversification was accompanied by neurohormonal stress, which led to an ongoing destabilization and re-patterning of the epigenome, which, in turn, enabled further morphological, physiological, and behavioral diversification." So associative learning, according to Ginsburg and Jablonka, was the driver of the Cambrian explosion. Ward (2018: 115) writes:

[The paper by Ginsburg and Jablonka] says that changes of behavior by both animal predators and animal prey began as an "arms race" in not just morphology but behavior. Learning how to hunt or flee; detecting food and mats and habitats at a distance from chemical senses of smell or vision, or from deciphering vibrations coming through water. Yet none of that would matter if the new behaviors and abilities were not passed on. As more animal body plans and the species they were composed of appeared, ecological communities changed radically and quickly. The epigenetic systems in animals were, according to the authors, "destabilized," and in reordering them it allowed new kinds of morphology, physiology, and again behavior, and amid this was the ever-greater use of powerful hormone systems. Seeing an approaching predator was not enough. The recognition of imminent danger would only save an animal's life if its whole body was alerted and put on a "war footing" by the flooding of the creature with stress hormones. Powerful enactors of action. Over time, these systems were made heritable and, according to the authors, the novel evolution of fight or flight chemicals would have greatly enhanced survivability and success of early animals "enabled animals to exploit new niches, promoted new types of relations and arms races, and led to adaptive responses that became fixed through genetics."

That, and vision. Brains, behavior, sense organs, and hormones are tied through the nervous system to the digestive system. No single adaptation led to animal success. It was the integration of these disparate systems into a whole that fostered survivability, and fostered the rapid evolution of new kinds of animals during the evolutionarily fecund Cambrian explosion.

So, ever-changing environments are how physiological systems evolved (see Richardson, 2017: Chapters 4 and 5). Therefore, if the environment were static, then physiologies would not have evolved. Ever-changing environments were imperative to the evolution of life on earth. For if this were not the case, organisms with complex physiologies (note that a physiological system is literally a whole complex of cells) would never have evolved and we would not be here.

In Chapter 8 Ward discusses epigenetic processes before and after mass extinctions. He states that, to mass extinction researchers, there are three ways in which mass extinctions have occurred: (1) asteroid or comet impact; (2) greenhouse mass extinction events; and (3) glaciation extinction events. These mass extinctions caused the emergence of new body plans and new species—brought on by epigenetic mechanisms.

Chapter 9 discusses good and bad times in human history—and the epigenetic changes that may have occurred. Ward (2018: 149) discusses the Toba eruption and that "some small group of survivors underwent a behavioral change that became heritable, producing cultural change that is difficult to overstate." Environmental change leads to behavioral change, which eventually leads to change in morphology, as Lamarck said, and mass extinction events are the perfect illustration of this process.

In chapter 10 Ward discusses epigenetics and violence, the star of the chapter being MAOA. Take this example from Ward (2018: 167-168):

Causing violent death or escaping violent death or simply being subjected to intense violence causes significant flooding of the body with a whole pharmacological medicine chest of proteins, and in so doing changes the chemical state of virtually every cell. This produces epigenetic change(s) that can, depending on the individual, create a newly heritable state that is passed on to the offspring. The epigenetic change caused by the fight-or-flight response may cause progeny to be more susceptible to causing violence.

Ward then discusses MAOA (pg 168-170), though read my thoughts on the matter. (He discusses the role of epigenetics in the "turning on" of the gene.) Child abuse has been shown to cause epigenetic changes in the brain (Zannas et al, 2015). (It's notable that Ward—rightly—in this chapter dispenses with the nature vs. nurture argument.)

In Chapter 11, Ward discusses food and famine changing our DNA. He cites the most popular example: the studies done on survivors of the Dutch famine who bore children during or after it. (I have discussed this at length.) In September of 1944, the Dutch ordered a nation-wide railroad strike. The Germans then restricted food and medical access to the country, causing the deaths of some 20,000 people and harming millions more. Those who were in the womb during the famine had higher rates of disorders such as obesity, anorexia, and cardiovascular incidences.

However, one study showed that if one’s father had little access to food during his slow growth period, then cardiovascular disease mortality in his children was low, whereas diabetes mortality was high when the paternal grandfather was exposed to an excess of food. Further, when SES factors were controlled for, there was a 32-year difference in lifespan, dependent on whether the grandfather was exposed to an overabundance or a scarcity of food just before puberty.

Nutrition can alter the epigenome (Zhang and Kutateladze, 2018); since the epigenome is heritable, these changes can be passed on to future generations too.

Ward then discusses the microbiome and epigenetics (read my article for a primer on the microbiome, what it does, and racial differences in it). The microbiome has been called “the second genome” (Grice and Segre, 2012), and so, any changes to the “second genome” can also be passed down to subsequent generations.

In Chapter 12, Ward discusses epigenetics and pandemics. Seeing people die from horrible diseases of course has horrible effects on people. Yes, there were evolutionary implications from these pandemics in that the gene pool was reduced—but what of the effects on the survivors? Methylation impacts behavior and behavior impacts methylation (Lerner and Overton, 2017), and so differing behaviors after such atrocities can be tagged on the epigenome.

Ward then takes the discussion on pandemics and death and shifts to religion. Imagine seeing your children die: would you not want to believe that there was a better place for them after death to, somewhat, quell your sorrow over their loss? Of course, having an epiphany about something (anything, not just religion) can change how you view life. Ward also discusses a study in which atheists had different brain regions activated even while no stimulation was presented. (I don’t like brain imaging studies; see William Uttal’s books and papers.) Ward also discusses the VMAT2 gene, which “controls” mood through the production of the VMAT protein, elevating hormones such as dopamine and serotonin (similar to taking numerous illegal drugs).

Then in Chapter 13 he discusses chemicals and toxins and how they relate to epigenetic processes. These kinds of chemicals and toxins are linked with changes in DNA methylation, microRNAs, and histone modifications (Hou et al, 2012). (Also see Tiffon, 2018 for more on chemicals and how they affect the epigenome.)

Finally, in Chapter 14 Ward discusses the future of evolution in a world with CRISPR-Cas9. He discusses many ways in which the technology can be useful to us. He discusses one study in which Chinese scientists knocked out the myostatin gene in 65 dog embryos. Twenty-seven of the dogs were born and only two—a male and a female—had both copies of the myostatin gene disrupted. This is just like when researchers made “double-muscled” cattle. See my article ‘Double-Muscled’ Humans?

He then discusses the possibility of “supersoldiers” and whether we can engineer humans to be emotionless killing machines. Imagine being able to engineer humans that had no sympathy, no empathy, that looked just like you and me. CRISPR is a tool that uses epigenetic processes and, thus, we can say that CRISPR is a man-made Lamarckian mechanism of genetic change (mimicking lateral gene transfer).

Now, let’s quickly discuss Coyne’s review before I give my thoughts on the book. He criticizes Ward’s article linked above (Coyne admits he did not read the book), specifically Ward’s claim that the two nautiluses discussed above are the same species with the same genome, with epigenetic forces leading to their differences in morphology (phenotype). Take Coyne’s critique of Vandepas et al (2016)—that they only sequenced two mitochondrial genes. Combosch et al (2017; of which Ward was a coauthor) write (my emphasis):

Moreover, previous molecular phylogenetic studies indicate major problems with the conchiological species boundaries and concluded that Nautilus represents three geographically distinct clades with poorly circumscribed species (Bonacum et al, 2011; Ward et al, 2016). This has been reiterated in a more recent study (Vandepas et al, 2016), which concluded that N. pompilius is a morphologically variable species and most other species may not be valid. However, these studies were predominantly or exclusively based on mitochondrial DNA (mtDNA), an informative but often misleading marker for phylogenetic inference (e.g., Stöger & Schrödl 2013) which cannot reliably confirm and/or resolve the genetic composition of putative hybrid specimens (Wray et al, 1995).

Looks like Coyne did not look hard enough for more studies on the matter. In any case, it’s not just Ward who makes this argument—many other researchers do (see e.g., Tajika et al, 2018). So, if there is no genetic difference between these two (so-called) species, and they have morphological differences, then it seems likely that the differences in morphology are environmentally driven.

Lastly, Coyne was critical of Ward’s thoughts on the heritability of histone modification, DNA methylation, etc. It seems that Coyne has not read the work of philosopher Jan Baedke (see his Google Scholar page), specifically his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics along with the work of sociologist Maurizio Meloni (see his Google Scholar page), specifically his book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics. If he did, Coyne would then see that his rebuttal to Ward makes no sense as Baedke discusses epigenetics from an evolutionary perspective and Meloni discusses epigenetics through a social, human perspective and what can—and does—occur in regard to epigenetic processes in humans.

Coyne did discuss Noble’s views on epigenetics and evolution—and Noble responded in one of his talks. However, it seems like Coyne is not aware of the work of Baedke and Meloni—I wonder what he’d say about their work? Anything that attacks the neo-Darwinian Modern Synthesis gets under Coyne’s skin—almost as if it is a religion for him.

Did I like the book? I thought it was good. Out of 5 stars, I give it 3. He got some things wrong. For instance, I asked Shea Robinson, author of Epigenetics and Public Policy: The Tangled Web of Science and Politics, about the beginning of the book and he directed me to two articles on his website: Lamarck’s Actual Lamarckism (or How Contemporary Epigenetics is not Lamarckian) and The Unfortunate Legacy of Jean-Baptiste Lamarck. The beginning of the book is rocky, the middle is good (discussing the Cambrian explosion), and the end is alright. The strength of the book is how Ward discusses the processes by which epigenetics occurs and how epigenetic processes can occur in—and help drive—evolutionary change, just as Jablonka and Lamb (1995, 2005) argue, along with Baedke (2018). The book is a great read, if only for the history of epigenetics (which Robinson (2018) covers in more depth, as do Baedke (2018) and Meloni (2019)).

Lamarck’s Revenge is a welcome addition to the slew of books and articles that go against the Modern Synthesis and should be required reading for those interested in the history of biology and evolution.

Evolutionary “Progress”: Gould’s Full House Argument

1600 words

Wind back the tape of life to the origin of modern multicellular animals in the Cambrian explosion, let the tape play again from this identical starting point, and the replay will populate the earth (and generate a right tail of life) with a radically different set of creatures. The chance that this alternative set will contain anything remotely like a human being must be effectively nil, while the probability of any kind of creature endowed with self‐consciousness must also be extremely small. (Gould, 1996. Full House)

Wind back the tape of life to the early days of the Burgess Shale; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like human intelligence would grace the replay. (Gould, 1989. Wonderful Life)

Wind back the clock to Cambrian times, half a billion years ago, when animals first exploded into the fossil record, and let it play forwards again. Would that parallel world be similar to our own? Perhaps the hills would be crawling with giant terrestrial octopuses. (Lane, 2015: 21. The Vital Question)

I first read Full House (Gould, 1996) about two years ago. I never was one to believe in evolutionary “progress”, though. As I read through the book, seeing how Gould weaved his love for baseball into an argument against evolutionary “progress” enthralled me. I love baseball, I love evolution, so this was the perfect book for me (indeed, one of my favorite books I have read in my life—and I have read a lot of them). The basic argument goes like this: There are more bacteria on earth than organisms deemed more “advanced”; if evolutionary “progress”—as popularly believed—were true, then there would be more “advanced” mammals than bacteria; there are more bacteria (“simpler” organisms) than mammals (more “advanced” organisms); therefore evolutionary “progress” is an illusion.

Evolutionary “progress” is entrenched in our society, as can be seen from popular accounts of human evolution (see picture below):

[Image: the familiar linear “march of progress” silhouettes of human evolution]

This is the type of “progress” that permeates the minds of the public at large.

Some may look at the diversity of life and conclude that there is a type of “progress” to evolution. However, Gould dispatches this type of assertion with his drunkard argument. Imagine a drunkard leaving a bar. There is the bar wall (the left wall of complexity) and the gutter (the right wall of complexity). As the drunkard walks, he may stumble back and forth between the wall and the gutter, but he will end up in the gutter every time.

Gould then explains his reasoning for using this type of argument:

I bring up this old example to illustrate but one salient point: In a system of linear motion structurally constrained by a wall at one end, random movement, with no preferred directionality whatever, will inevitably propel the average position away from a starting point at the wall. The drunkard falls into the gutter every time, but his motion includes no trend whatever toward this form of perdition. Similarly, some average or extreme measure of life might move in a particular direction even if no evolutionary advantage, and no inherent trend, favor that pathway (Gould, 1996: 151).
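Gould’s point is easy to check for oneself. Here is a minimal simulation of the drunkard’s walk (a sketch with illustrative parameters of my choosing—the step size, walk length, and number of walkers are arbitrary, not from Gould): each step is unbiased, yet the average distance from the wall grows, while the most common position stays at the wall.

```python
import random

def walk(steps=1000, wall=0):
    """One drunkard: an unbiased random walk with a reflecting
    wall at 0 (minimal complexity; nothing can be simpler than the wall)."""
    pos = 0
    for _ in range(steps):
        pos += random.choice([-1, 1])
        pos = max(pos, wall)  # bounce off the left wall
    return pos

positions = [walk() for _ in range(2000)]
mean = sum(positions) / len(positions)
mode = max(set(positions), key=positions.count)

# The mean drifts right even though every step was unbiased, while the
# modal walker stays near the wall -- a "planet of bacteria" with a thin
# right tail of more complex forms.
print(f"mean position: {mean:.1f}, modal position: {mode}")
```

The only thing doing any work here is the wall; no step is biased toward complexity, which is exactly Gould’s point.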

The claim that there is a type of “progress” to evolution is only due to the fact—in my opinion—that humans exist and are the most “advanced” species on earth.

It seems that JP Rushton did not read this critique of evolutionary “progress”, since not even a year after Gould published Full House, Rushton published a new edition of Race, Evolution, and Behavior (Rushton, 1997) where he argues (on pages 292-294) that there is, indeed, “progress” to evolution. He cites Aristotle, Darwin (1859), Wilson (1975), Russell (1983, 1989; read my critique of Russell’s theory), and Bonner.

To be brief:

The Great Chain of Being (which Rushton’s r/K selection theory attempts to revive) is not valid; Wilson’s idea of “biological progression” is taken care of by Gould’s drunkard argument; Bonner asks why there has been evolution from simple to advanced, and this, too, is taken care of by Gould’s drunkard argument; and finally there is Dale Russell’s argument about the troodon (which I will expand on below).

Rushton claims that Russell, in his 1989 book Odysseys in Time: Dinosaurs of North America (which I bought specifically to get more info on Russell’s thoughts on the matter and to get more information for an article on it), argued that “if [dinosaurs] had not gone extinct, dinosaurs would have progressed to a large-brained, bipedal descendent” (Rushton, 1997: 294). Either Rushton only glanced at Russell’s writings or he’s being inherently dishonest: Russell claimed that had the dinosaurs not gone extinct, one dinosaur—the troodon—would have evolved into a bipedal, human-like being. Russell made these claims since the troodon had an EQ about 6 times that of the average dinosaur, ran on two legs, and had use of its ‘hands.’ So, due to this, Russell argues that had the dinosaurs not gone extinct, the troodons could possibly have become human-like. However, there are two huge problems for this hypothesis.

In the book Up From Dragons, Skoyles and Sagan (2002: 12) write:

But cold-bloodedness is a dead-end for the great story of this book—the evolution of intelligence. Certainly reptiles could evolve huge sizes, as they did over vast sweeps of Earth as dinosaurs. But they never could have evolved our quick-witted and smart brains. Being tied to the sun restricts their behavior: Instead of being free and active, searching and understanding the world, they spend too much time avoiding getting too hot or too cold.

So, since dinosaurs were cold-blooded and being tied to the sun restricted their behavior, had they survived the K-T extinction event, it is highly implausible that they would have evolved brains our size.

Furthermore, Hopson (1977: 444) writes:

I would argue, as does Feduccia (44), that the mammalian/avian levels of activity claimed by Bakker for dinosaurs should be correlated with a great increase in motor and sensory control and this should be reflected in increased brain size. Such an increase is not indicated by most dinosaur endocasts.

Gould even writes in Wonderful Life:

If mammals had arisen late and helped to drive dinosaurs to their doom, then we could legitimately propose a scenario of expected progress. But dinosaurs remained dominant and probably became extinct only as a quirky result of the most unpredictable of all events—a mass dying triggered by extraterrestrial impact. If dinosaurs had not died in this event, they would probably still dominate the large-bodied vertebrates, as they had for so long with such conspicuous success, and mammals would still be small creatures in the interstices of their world. This situation prevailed for one hundred million years, why not sixty million more? Since dinosaurs were not moving towards markedly larger brains, and since such a prospect may lie outside the capability of reptilian design (Jerison, 1973; Hopson, 1977), we must assume that consciousness would not have evolved on our planet if a cosmic catastrophe had not claimed the dinosaurs as victims. In an entirely literal sense, we owe our existence, as large reasoning mammals, to our lucky stars. (Gould, 1989: 318)

I really don’t think it’s possible that brains our size would have evolved had the dinosaurs not gone extinct, and the data we have about dinosaurs strongly points to that assertion.

Staying on the topic of progression and brain size, there is one more thing I want to note. Deacon (1990a) argues that fallacies exist in the assertion that brain size progressed throughout evolutionary history. One of Deacon’s fallacies is the “evolutionary progression fallacy.” The concept of “progress” finds “implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar” (Deacon, 1990a: 195).

This, in my opinion, is the last refuge for progressionists: looking at the apparent rise of brain size in evolutionary history and saying “Aha! There it is—progress!” But the so-called progress in brain size evolution is only due to allometric processes; there is no true “progress” in brain size, and no unbiased allometric baseline exists, so these types of claims from progressionists fail. Lastly, Deacon (1990b) argues that so-called brain size progress vanishes when functional specialization is taken into account.

Therefore it is unlikely that dinosaurs would have evolved brains our size.

In sum, there are many ways that progressionists attempt to show that there is “progress” in evolution. However, they all fail since Gould’s argument is always waiting to rear its head. Yes, some organisms have evolved greater complexity—i.e., moved toward the right wall—though this is not evidence for “progress.” Many—if not all—accounts of “progress” fail. There is no “progress” in brain size evolution; there would not be human-like dinosaurs had the dinosaurs not gone extinct in the K-T extinction event. We live on a planet of bacteria—that is, bacteria are the most numerous type of organism on earth—and so evolutionary progress cannot be true.

Complexity—getting to the right wall—is an inevitability, just as it is an inevitability that the drunkard would eventually stumble to the gutter. But this does not mean that there is “progress” to evolution.

The argument in Gould’s Full House can be simply stated like this:

P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.

Vegans/Vegetarians vs. Carnivores and the Neanderthal Diet

2050 words

The vegan/vegetarian-carnivore debate rests on a false dichotomy. Of course, the middle ground is eating both plants and animals. I, personally, eat more meat (as I eat a high-protein diet) than plants, but the plants are good for a palate switch-up and for getting other nutrients into my diet. In any case, on Twitter, I see that there is a debate between “carnivores” and “vegans/vegetarians” over which diet is healthier. I think the “carnivore” diet is healthier, though there is no evolutionary basis for the claims its proponents espouse (because we did evolve from plant-eaters). In this article, I will discuss the best argument for ethical vegetarianism and the evolutionary basis for meat-eating.

Veganism/Vegetarianism

The ethical vegetarian argument is simple: Humans and non-human animals deserve the same moral consideration. Since they deserve the same moral consideration and we would not house humans for food, it follows that we should not house non-human animals for food. The best argument for ethical vegetarianism comes from Peter Singer, in Unsanctifying Animal Life. Singer’s argument can also be extended to using non-human animals for entertainment, research, and companionship.

Any being that can suffer has an interest in avoiding suffering. The equal consideration of interests principle (Guidi, 2008) asserts that the same interests merit the same moral consideration, whether the interest-bearer is a human or a non-human animal.

Here is Singer’s argument, from Just the Arguments: 100 of the Most Important Arguments in Western Philosophy (pg. 277-278):

P1. If a being can suffer, then that being’s interests merit moral consideration.

P2. If a being cannot suffer, then that being’s interests do not merit moral consideration.

C1. If a being’s interests merit moral consideration, then that being can suffer (transposition, P2).

C2. A being’s interests merit moral consideration if and only if that being can suffer (material equivalence, P1, C1).

P3. The same interests merit the same moral consideration, regardless of what kind of being is the interest-bearer (equal consideration of interests principle).

P4. If one causes a being to suffer without adequate justification, then one violates that being’s interests.

P5. If one violates a being’s interests, then one does what is morally wrong.

C3. If one causes a being to suffer without adequate justification, then one does what is morally wrong (hypothetical syllogism, P4, P5).

P6. If P3, then if one kills, confines, or causes nonhuman animals to experience pain in order to use them as food, then one causes them to suffer without adequate justification.

P7. If one eats meat, then one participates in killing, confining, and causing nonhuman animals to experience pain in order to use them as food.

C4. If one eats meat, then one causes nonhuman animals to suffer without adequate justification (hypothetical syllogism, P6, P7).

C5. If one eats meat, then one does what is morally wrong (hypothetical syllogism, C3, C4).
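The inference rules named in the derivation (transposition, hypothetical syllogism) are standard and can be checked mechanically. A quick brute-force truth-table check of the propositional skeletons—my sketch, not anything Singer provides—looks like this:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

def tautology(form, n_vars):
    """True if the form holds under all 2**n_vars truth assignments."""
    return all(form(*vals) for vals in product([True, False], repeat=n_vars))

# Transposition, used to get C1 from P2: (not-S -> not-M) entails (M -> S)
print(tautology(lambda s, m:
                implies(implies(not s, not m), implies(m, s)), 2))  # True

# Hypothetical syllogism, used for C3, C4, and C5:
# (P -> Q) and (Q -> R) entail (P -> R)
print(tautology(lambda p, q, r:
                implies(implies(p, q) and implies(q, r),
                        implies(p, r)), 3))  # True
```

Validity, of course, is separate from soundness: the truth tables vouch for the derivation’s form, not for premises like P3 or P6.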

This argument is pretty strong; indeed, it is sound. However, I personally will never eat a vegetarian/vegan diet because I love eating meat too much (steak, turkey, chicken). I will do what is morally wrong because I love the taste of meat.

In an evolutionary context, the animals we evolved from were plant-eaters. The amount of meat in our diets grew as we diverged from our non-human ancestors; we added meat through the ages as our tool-kit became more complex. Since the animals we evolved from were plant-eaters and we added meat as time went on, then, clearly, we were not “one or the other” in regard to diet—our diet constantly changed as we migrated into new biomes.

So although Singer’s argument is sound, I will never become a vegan/vegetarian. Fatty meat tastes too good.

Nathan Cofnas (2018) argues that “we cannot say decisively that vegetarianism or veganism is safe for children.” This is because even if the vitamins and minerals not gotten through the diet are supplemented, the bioavailability of the consumed nutrients is lower (Pressman, Clement, and Hayes, 2017). Furthermore, pregnant women should not eat a vegan/vegetarian diet, since vegetarian diets can lead to B12 and iron deficiency along with low birth weight, and vegan diets can lead to DHA, zinc, and iron deficiencies along with a higher risk of pre-eclampsia and inadequate fetal brain development (Danielewicz et al, 2017). (See also Tan, Zhao, and Wang, 2019.)

Carnivory

Meat was important to our evolution; this cannot be denied. However, prominent “carnivores” take this fact and push it further than it goes. Yes, there is data showing that meat-eating allowed our brains to grow bigger, trading off with body size. Fonseca-Azevedo and Herculano-Houzel (2012) showed that metabolic limitations resulting from hours of feeding and the low caloric yield of raw foods explain the brain/body size trade-off in great apes. Plant foods are low in kcal; great apes have large bodies and so need to eat a lot of plants. They spend about 10 to 11 hours per day feeding. Our brains, on the other hand, started increasing in size with the appearance of erectus.

If erectus ate nothing but raw foods, he would have had to eat for more than 8 hours per day to power a brain with a number of neurons near our level (about 86 billion; Herculano-Houzel, 2009). Thus, due to the extreme difficulty of attaining the number of kcal needed to power a brain with that many neurons, it is very unlikely that erectus would have been able to survive on only raw plant foods while eating 8+ hours per day. Indeed, with the archaeological evidence we have about erectus, it is patently ridiculous to claim that he ate for that long. Great apes, by contrast, mostly graze all day—they need to, as the caloric availability of raw foods is lower than that of cooked foods (even cooked plant foods have a higher bioavailability of nutrients)—and so, to afford their large bodies, they must do little else but eat.
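The shape of Fonseca-Azevedo and Herculano-Houzel’s (2012) constraint can be sketched with back-of-the-envelope numbers. The ~6 kcal/day per billion neurons figure is Herculano-Houzel’s; the Kleiber-style body cost and the raw-food intake rate below are illustrative assumptions of mine, not the paper’s fitted values:

```python
def raw_feeding_hours(body_kg, billion_neurons, kcal_per_hour_raw=230):
    """Hours/day of raw-food feeding needed to cover brain + body costs.
    brain cost: ~6 kcal/day per billion neurons (Herculano-Houzel)
    body cost:  Kleiber-style scaling, ~70 * mass(kg)**0.75 kcal/day
    kcal_per_hour_raw: assumed net yield of an hour of raw feeding"""
    brain_kcal = 6 * billion_neurons
    body_kcal = 70 * body_kg ** 0.75
    return (brain_kcal + body_kcal) / kcal_per_hour_raw

# A large great ape (~120 kg, ~33 billion neurons) vs. an erectus-sized
# hominin (~60 kg, ~62 billion neurons) on raw food alone:
print(f"{raw_feeding_hours(120, 33):.1f} h/day")  # ~11.9 h, near the ape ceiling
print(f"{raw_feeding_hours(60, 62):.1f} h/day")   # ~8.2 h, the 8+ hours above
```

Cooking raises the effective kcal per hour of feeding, which is the whole trick: with a higher intake rate, the same equation returns feeding times that leave hours free for everything else erectus is known to have done.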

It makes no sense for erectus—and our immediate Homo sapiens ancestors—to eat nothing but raw plant foods for what amounts to more than a work day in the modern world. If this were the case, where would they have found the time to do everything else that we have learned about them in the archaeological record?

There is genetic evidence for human adaptation to a cooked diet (Carmody et al, 2016). Cooking food denatures the protein in it, making it easier to digest. (Denaturation is the alteration of the shape of the proteins in whatever is being cooked.) Take the same kind of food: it will have different nutrient bioavailability depending on whether or not it is cooked. This difference, Herculano-Houzel (2016) and Wrangham (2009) argue, is what drove the evolution of our genus and our big brains.

Just because meat-eating and cooking was what drove the evolution of our big brains—or even only allowed our brains to grow bigger past a certain point—does not mean that we are “carnivores”; though it does throw a wrench into the idea that we—as in our species Homo sapiens—were strictly plant-eaters. Our ancestors ate a wide-range of foods depending on the biome they migrated to.

Our brain takes up around 20 percent of our TDEE while representing only 2 percent of our overall body mass, the reason being our 86 billion neurons (Herculano-Houzel, 2011); at roughly 6 kcal per billion neurons per day, that comes to about 500 kcal. So, clearly, as our brains grew bigger and acquired more neurons, there had to have been a way for our ancestors to acquire the energy needed to power their brains and neurons and, as Fonseca-Azevedo and Herculano-Houzel (2012) show, it was not possible on a plant-only diet. Eating and cooking meat was the impetus for growing our brains and keeping their size.

Take this thought experiment. An asteroid smashes into the earth; a huge dust cloud blocks out the sun, lowering food production. This halting of the production of high-quality foods persists for hundreds of years. What would happen to our bodies and brains? They would, of course, shrink, depending on how much and what we eat. Food scarcity and availability do influence the brain and body size of primates (Montgomery et al, 2010), and humans would be no different. So, in the scenario I have concocted, we would shrink in both brain and body size. I would imagine that in such a scenario high-quality foods would disappear or become extremely hard to come by. This further buttresses the hypothesis that a shift to higher-quality energy is how and why our large brains evolved.

Neanderthal Diet

A new analysis of a Neanderthal tooth apparently establishes that they were mostly carnivorous, living mostly on horse and reindeer meat (Jaouen et al, 2019). Neanderthals did indeed have a high-meat diet in northerly latitudes during the cold season. Neanderthals in Southern Europe—especially during the warmer seasons—however, ate a mixture of plants and animals (Fiorenza et al, 2008). Further, there was a considerable plant component to the diet of Neanderthals (Perez-Perez et al, 2003) (with plant-rich diets for Neanderthals being seen mostly in the Near East; Henry, Brooks, and Piperno, 2011), while the diets of both Neanderthals and Homo sapiens varied due to climatic fluctuations (El Zataari et al, 2016). From what we know about modern human biochemistry and digestion, we can further make the claim that Neanderthals ate a good amount of plants.

Ulijaszek, Mann, and Elton (2013: 96) write:

‘Absence of evidence’ does not equate to ‘evidence of absence,’ and the meat-eating signals from numerous types of data probably swamp the plant-eating signals for Neanderthals. Their dietary variability across space and time is consistent with the pattern observed in the hominin clade as a whole, and illustrates hominin dietary adaptability. It also mirrors trends observed in modern foragers, whereby those populations that live in less productive environments have a greater (albeit generally not exclusive) dependence on meat. Differences in Neanderthal and modern human diet may have resulted from exploitation of different environments: within Europe and Asia, it has been argued that modern humans exploited marginal areas, such as steppe environments, whereas Neanderthals may have preferred more mosaic, Mediterranean-type habitats.

Quite clearly, one cannot point to any one study to support an (ideologically driven) belief that our genus or Neanderthals were “strictly carnivorous”: there was great variability in the Neanderthal diet, as I have shown.

Conclusion

Singer’s argument for ethical vegetarianism is sound; I personally can find no fault in it (if anyone can, leave a comment and we can discuss it, I will take Singer’s side). Although I can find no fault in the argument, I would never become a vegan/vegetarian as I love meat too much. There is evidence that vegan/vegetarian diets are not good for growing children and pregnant mothers, and although the same can be said for any type of diet that leads to nutrient deficiencies, the risk is much higher in these types of plant-based diets.

The evidence that we were meat-eaters in our evolutionary history is there, but we evolved as eclectic feeders. There was great variability in the Neanderthal diet depending on where they lived, and so the claim that they were “full-on carnivores”—that they ate meat and only meat—is false. The literature attests to great dietary flexibility and variability in both Homo sapiens and Neanderthals.

My conclusion in my look into our diet over evolutionary time was:

It is clear that both claims from vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as “what we should be” eating. Humans are eclectic feeders; we will eat anything since “Humans show remarkable dietary flexibility and adaptability“. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclecticism can be traced back to our Australopithecine ancestors. The claim that we are either “vegetarian/vegan or carnivore” throughout our evolution is false.

There is no evidence for either of these claims from these extreme camps; humans are eclectic feeders. We are omnivorous, not vegan/vegetarian or carnivores. Although we did evolve from plant-eating primates and then added meat into our diets over time, there is no evidence for the claim that we ate only meat. Our dietary flexibility attests to that.

Gene-Selectionism vs. Developmental Systems Theory

2300 words

Two dominant theories exist in regard to development: the “gene’s eye view”—gene selectionism (GS)—and the developmental view—developmental systems theory (DST). GS proposes that there are two fundamental processes in regard to evolution: replication and interaction. Replicators (the term was coined by Dawkins) are anything that is copied into the next generation, whereas interactors (vehicles) are things that only exist to ensure the replicators’ survival. Thus, Dawkins (1976) proposes a distinction between the “vehicle” (the organism) and its “riders”/replicators (the genes).

Gene selectionism

Gene selectionists propose a simple hypothesis: evolution through the differential survival of genes, its main premise being that the “gene” is “the ultimate, fundamental unit of natural selection.” Dusek (1999: 156) writes that “Gene selection claims that genes, not organisms, groups of organisms or species, are selected. The gene is considered to be the unit of selection.” The view of gene selectionists is best—and most popularly—put by Richard Dawkins’ seminal book The Selfish Gene (1976), in which he posits that genes “compete” with each other, and that our “selfish” actions are the result of our genes attempting to replicate themselves into the next generation, relegating our bodies to disposable “vehicles” that only house the “replicators” (or “drivers”).

Though, just because one is a gene selectionist does not necessarily mean that one is a genetic determinist (both views will be discussed below). Gene selectionists are committed to the view that genes make a distinctive contribution toward building interactors. Dawkins (1982) claims that genetic determinism is not a problem in regard to gene selectionism. Replicators (genes) have a special status to gene selectionists. Gene selectionists argue that adaptive evolution only occurs through cumulative selection, while only the replicators persist through the generations. Gene selectionists do not see organisms as replicators, since genes—and not organisms—are what is replicated, according to the view.

The gene selectionist view (Dawkins’ 1976 view) can also be said to apply what Okasha (2018) terms “agential thinking”. “Agential thinking” is “treating an evolved organism as if it were an agent pursuing a goal, such as survival or reproduction, and treating its phenotypic traits, including its behaviour, as strategies for achieving that goal, or furthering its biological interests” (Okasha, 2018: 12). Dawkins—and other gene selectionists—treat genes as if they have agency, speaking of “intra-genomic conflict”, as if genes are competing with each other (sure, it’s “just a metaphor”, see below).

Okasha (2018: 71) writes:

To see how this distinction relates to agential thinking, note that every gene is necessarily playing a zero-sum game against other alleles at the same locus: it can only spread in the population if they decline. Therefore every gene, including outlaws, can be thought of as ‘trying’ to outcompete alleles, or having this as its ultimate goal.

Selfish genes also have an intermediate goal: to maximize fitness, which is done through expression in the organismic phenotype.
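Okasha’s zero-sum point is, at bottom, just the algebra of allele frequencies: at one locus the frequencies must sum to 1, so allele A can spread only if its alternative declines. A toy haploid Wright-Fisher simulation (a sketch; all parameters are illustrative, not from Okasha) makes this concrete:

```python
import random

def wright_fisher(p=0.5, w_a=1.05, w_b=1.0, pop=1000, gens=200):
    """Two alleles, A and B, at one locus in a haploid population.
    p is A's frequency; B's frequency is 1 - p by definition, so any
    gain for A is automatically a loss for B -- the 'zero-sum game'."""
    for _ in range(gens):
        # selection: A's expected share, weighted by relative fitness
        expected = p * w_a / (p * w_a + (1 - p) * w_b)
        # drift: resample the next generation binomially
        p = sum(random.random() < expected for _ in range(pop)) / pop
        if p in (0.0, 1.0):  # fixation or loss
            break
    return p

print(wright_fisher())  # A's final frequency; B's is exactly 1 minus this
```

Nothing in the simulation is “trying” to do anything; the “competition” is bookkeeping over frequencies, which is why the agential language is, at best, a metaphor.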

Thus, according to Okasha (2018: 73), “… selfish genetic elements have phenotypic effects which can be regarded as adaptations, but only if we apply the notions of agent, benefit, and goal to genes themselves”, though “… only in an evolutionary context [does] it [make] sense to treat genes as agent-like and credit them with goals and interests.” But it does not make sense to treat genes as even “agent-like” and credit them with goals and interests, since goals and interests can only be attributed to humans.

Other genes have as their intermediate goal to enhance the fitness of their host organism’s relatives, by causing altruistic behaviour [genes can’t cause altruistic behavior; it is an action]. However, a small handful of genes have a different intermediate goal, namely to increase their own transmission in their host organism’s gametes, for example, by biasing segregation in their favour, or distorting the sex-ratio, or transposing to new sites in the genome. These are outlaws, or selfish genetic elements. If outlaws are absent or are effectively suppressed, then the genes within a single organism have a common (intermediate) goal, so will cooperate: each gene can only benefit itself by benefiting the whole organism. Agential thinking then can be applied to the organism itself. The organism’s goal—maximizing its fitness—then equates to the intermediate goal of each of the genes within it. (Okasha, 2018: 72)

Attributing agential thinking to anything other than humans is erroneous, since genes are not “selfish.”

The selfish gene is one of the main theories that define the neo-Darwinian paradigm and it is flat out wrong. Genes are not ultimate causes, as the crafters of the neo-Darwinian Modern Synthesis (MS) propose, genes are resources in a dynamic system and can thusly only be seen as causes in a passive, not active, sense (Noble, 2011).

Developmental systems

The alternative to the gene-centric view of evolution is that of developmental systems theory (DST), first proposed by Oyama (1985).

The argument for DST is simple:

(1) Organisms obviously inherit more than DNA from their parents. Since organisms can behave in ways that alter the environment, environments are also passed onto offspring. Thus, it can be said that genes are not the only things inherited, but a whole developmental matrix is.

(2) Genes, according to the orthodox view of the MS, interact with many other factors for development to occur, and so genes are not the only thing that help ‘build’ the organism. Genes can still play some “privileged” role in development, in that they “control”, “direct” or “organize” everything else, but this is up to gene-selectionists to prove. (See Noble, 2012.)

(3) The common claim that genes contain “information” (that is, context-independent information) is untenable, since every reconstruction of the notion of “information” that applies to genes applies equally to other developmental factors. Genes cannot be singled out as privileged causes in development.

(4) Other attempts to single genes out—such as the claim that genes are copied more “directly”—are mistaken: they try to draw a principled distinction between genes and other developmental factors, but fail.

(5) Genes, then, cannot be privileged in development, and are no different than any other developmental factor. Genes, in fact, are just passive templates for protein construction, waiting to be used by the system in a context-dependent fashion (see Moore, 2002; Schneider, 2007). The entire developmental system reconstructs itself “through numerous independent causal pathways” (Sterelny and Griffiths, 1999: 109).

DNA is not the only thing inherited, and the so-called “famed immortality of DNA is actually a property of cells [since] [o]nly cells have the machinery to correct frequent faults that occur in DNA replication.” The thing about replication, though, is that “DNA and the cell must replicate together” (Noble, 2017: 238). A whole slew of developmental tools is inherited, and that is what constructs the organism; organisms are, quite obviously, constructed not by genes alone.

Developmental systems, as described by Oyama (1985: 49), do not “have a final form, encoded before its starting point and realized at maturity. It has, if one focuses finely enough, as many forms as time has segments.” Oyama (1985: 61) further writes that “The function of the gene or any other influence can be understood only in relation to the system in which they are involved. The biological relevance of any influence, and therefore the very ‘information’ it conveys, is jointly determined, frequently in a statistically interactive, not additive, manner, by that influence and the system state it influences.”

DNA is, of course, important. For without it, there would be nothing for the cell to read (recall how the genome is an organ of the cell) and so no development would occur. But DNA is “information” about an organism only in the process of cellular functioning.

The simple fact of the matter is this: the development of organs and tissues is not directly “controlled” by genes, but by the exchange of signals between cells. “Details notwithstanding, what is important to note is that whatever kinds of signals [a cell] sends out depends on the kind of signals it receives from its immediate environment. Therefore, neighboring cells are interdependent, and it’s local interactions among cells that drive the developmental processes” (Kampourakis, 2017: 173).

The fact of the matter is that whether or not a trait is realized depends on the developmental processes (and the physiologic system itself) and the environment. Kampourakis, just like Noble (2006, 2012, 2017) pushes a holistic view of development and the system. Kampourakis (2017: 184) writes:

What genetics research consistently shows is that biological phenomena should be approached holistically, at various levels. For example, as genes are expressed and produce proteins, and some of these proteins regulate or affect gene expression, there is absolutely no reason to privilege genes over proteins. This is why it is important to consider developmental processes in order to understand how characters and disease arise. Genes cannot be considered alone but only in the broader context (cellular, organismal, environmental) in which they exist. And both characters and disease in fact develop; they are not just produced. Therefore, reductionism, the idea that genes provide the ultimate explanation for characters and disease, is also wrong. In order to understand such phenomena, we need to consider influence at various levels of organization, both bottom-up and top-down. This is why current research has adopted a systems biology approach (see Noble, 2006; Voit, 2016 for accessible introductions).

All this shows that developmental processes and interactions play a major role in shaping characters. Organisms can respond to changing environments through changes in their development and eventually their phenotypes. Most interestingly, plastic responses of this kind can become stable and inherited by their offspring. Therefore, genes do not predetermine phenotypes; genes are implicated in the development of phenotypes only through their products, which depends on what else is going on within and outside cells (Jablonka, 2013). It is therefore necessary to replace the common representation of gene function presented in Figure 9.6a, which we usually find in the public sphere, with others that consider development, such as the one in figure 9.6b. Genes do not determine characters, but they are implicated in their development. Genes are resources that provide cells with a generative plan about the development of the organism, and have a major role in this process through their products. This plan is the resource for the production of robust developmental outcomes that are at the same time plastic enough to accommodate changes stemming from environmental signals.


Figure 9.6 (a) The common representation of gene function: a single gene determines a single phenotype. It should be clear from what has been presented in the book so far that this is not accurate. (b) A more accurate representation of gene function that takes development and environment into account. In this case, a phenotype is produced in a particular environment by developmental processes in which genes are implicated. In a different environment the same genes might contribute to the development of a different phenotype. Note the “black box” of development.

[Kampourakis also writes on page 188, note 3]

In the original analogy, Wolpert (2011, p. 11) actually uses the term “program.” However, I consider the term “plan” as more accurate and thus more appropriate. In my view, the term “program” implies instructions and their implementation, whereas the term “plan” is about instructions only. The notion of a genetic program can be very misleading because it implies that, if it were technically feasible, it would be possible to compute an organism by reading the DNA sequence alone (see Lewontin, 2000, pp. 17-18).

Kampourakis is obviously speaking of a “plan” in a context-dependent manner since that is the only way that genes/DNA contain “information” (Moore, 2002; Schneider, 2007). The whole point is that genes, to use Noble’s terminology, are “slaves” to the system, since they are used by and for the (physiological) system. Developmental systems theory is a “wholeheartedly epigenetic approach to development, inheritance and evolution” (Hochman and Griffiths, 2015).

This point is driven home by Richardson (2017: 111):

And how did genes eventually become established? Probably not at all as the original recipes, designers, and controllers of life. Instead they arose as templates for molecular components used repeatedly in the life of the cell and the organism: a kind of facility for just-in-time production of parts needed on a recurring basis. Over time, of course, the role of these parts themselves evolved to become key players in the metabolism of the cell—but as part of a team, not the boss.

[…]

It is not surprising, then, that we find that variation in form and function has, for most traits, only a tenuous relationship with variation in genes.

[And also writes on page 133]:

There is no direct command line between environments and genes or between genes and phenotypes. Predictions and decisions about form and variation are made through a highly evolved dynamical system. That is why ostensibly the same environment, such as a hormonal signal, can initiate a variety of responses like growth, cell division, differentiation, and migration, depending on deeper context. This reflects more than fixed responses from fixed information in genes, something fatally overlooked in the nature-nurture debate.

(Also read Richardson’s article So what is a gene?)

Conclusion

The gene-selectionist point of view entails too many (false) assumptions. The DST point of view, on the other hand, does not fall prey to the pitfalls of the gene-selectionist POV; developmental systems theorists look at the gene not as the ultimate cause of development—with evolutionary change driven only by changes in gene frequency—but as a product to be used by and for the system. Genes can only be looked at in terms of development, and in no other way (Kampourakis, 2017; Noble, 2017). Thus, the gene-selectionists are wrong; the main tenet of the neo-Darwinian Modern Synthesis—gene-selectionism, the selfish gene—has been refuted (Jablonka and Lamb, 2005; Noble, 2006, 2011). The main tenets of the Modern Synthesis have been refuted, and so it is now time to replace it with a new view of evolution: one that includes the role of genes and development and the role of epigenetics in the developmental system. The gene-selectionist view champions an untenable view of the gene—that the gene is privileged above all other developmental variables—but Noble and Kampourakis show that this is not the case, since DNA is inherited with the cell; the cell, to use the language of Dawkins, is what is “immortal”—not DNA itself.

A priori, there is no privileged level of causation, and this includes the gene, which so many place at the top of the hierarchy (Noble, 2012).

What Is the “Human Diet”?

3000 words

Is there one diet (or one with slight modifications) that all humans should be eating? I’m skeptical of such claims. Both vegans (those who do not eat or use animal products) and carnivores (those who eat only animal products), in my opinion, have some warped views on diet and human evolution. Both are extreme views with little to no support; both have wrong ideas about diet throughout our evolution; both get some things right. While it is hard to pinpoint what the “human diet” is, clearly there were certain things that we ate throughout our evolutionary niches in our ancestral Africa that we “should” be eating today (in good quantities).

Although it is difficult to reconstruct the diet of early hominids due to lack of specimens (Luca, Perry, and Rienzo, 2014), by studying the eating behavior of our closest evolutionary relatives—the chimpanzees—we can get an idea of what our LCA ate and its eating behavior (Ulijaszek, Mann, and Elton, 2013). Humans have been in most every niche we could possibly be in and, therefore, have come across the most common foods in each ecology. If animal A is in ecosystem E with foods X, Y, and Z, then animal A eats foods X, Y, and Z, since animals consume what is in their ecosystem. Knowing this much, the niches our ancestors lived in had to have a mix of both game and plants, and therefore that was our diet (in differing amounts, obviously). But it is more complicated than that.

So, knowing this, according to Ulijaszek, Mann, and Elton (2013: 35), “Mammalian comparisons may be more useful than ‘Stone Age’ perspectives, as many of the attributes of hominin diets and the behaviour associated with obtaining them were probably established well before the Pleistocene, the time stone agers were around (Foley 1995; Ulijaszek 2002; Elton 2008a).” Humans eat monocots (various flowering plants with one seed), which is not common in our order. The advent of farming was us “expanding our dietary niche”, which began “the widespread adoption of agriculture [which] is an obvious point of transition to a ‘monocot world’” (Ulijaszek, Mann, and Elton, 2013). Although these foodstuffs dominate our diet, there is seasonality in which types of those foods we consume.

So, since humans tend not to pick at things to eat but have discrete meals (it is worth noting that the idea that one should have “three square meals a day” is a myth; see Mattson et al, 2014), we need to eat a lot in the times we do eat. And since we are large-bodied primates with much higher energy needs (due to our large brains, which consume 20 percent of our daily caloric intake), we need higher-quality energy. The overall quality and energy density of our diets are due to meat-eating—meat which folivorous/frugivorous primates do not consume. We have a shorter gut tract, which is “often attributed to the greater reliance of faunivory in humans“, though “humans are not confined to ‘browse’ vegetation … and make extensive use of grasses and their animal consumers” (Ulijaszek, Mann, and Elton, 2013: 58). Due to this, we show amazing dietary flexibility and adaptability, with the ability to eat a wide range of foodstuffs in most any environment we find ourselves in.

So “It is difficult to pinpoint what the human diet actually is … Nonetheless, humans are frequently described as omnivores” (Ulijaszek, Mann, and Elton, 2013: 59). Omnivores normally feed at two or more trophic levels, though some define omnivory as simply consuming both plants and animals (Chubaty et al, 2014). Trophic level one is taken up by plants; level two by herbivores—primary consumers; level three by predators, who feed on the herbivores; levels four and five by apex predators and carnivores; while the last level is also taken up by detrivores, those who feed on waste. Though, of course, “omnivory” is a continuum and not a category in and of itself. Humans eat primary producers (plants) and primary consumers (herbivores) and some secondary consumers (like fish), “although human omnivory may only be possible because of technological processing” (Ulijaszek, Mann, and Elton, 2013: 59). Other animals described as “omnivorous” eat foods from only one trophic level and consume food from another level only when needed.
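Since “feeding at two or more trophic levels” is the operative definition here, it can be encoded in a few lines (a toy sketch; the level labels follow the paragraph above, and the function and variable names are mine):

```python
TROPHIC_LEVELS = {
    1: "primary producers (plants)",
    2: "primary consumers (herbivores)",
    3: "secondary consumers (predators of herbivores)",
    4: "apex predators / carnivores",
}

def is_omnivore(levels_fed_at):
    """Omnivory, on the definition above: feeding at >= 2 trophic levels."""
    return len(set(levels_fed_at)) >= 2

# Humans eat plants (1), herbivores (2), and some secondary consumers
# such as fish (3); a strict folivore feeds at level 1 only.
print(is_omnivore({1, 2, 3}))  # True
print(is_omnivore({1}))        # False
```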

Humans—as a species—rely on meat consumption. Fonseca-Azevedo and Herculano-Houzel (2012) showed that the energetic cost of a brain is directly related to the number of neurons in the brain. So there were metabolic limitations in regard to brain and body size. The number of hours available to feed, along with the low caloric yield of plant foods, explains why great apes have such large bodies and small brains—a limitation that was probably overcome by erectus, who started cooking food around 1.5 mya. If we consumed only a diet of raw foods, then it would take us around 9 h/day to consume the calories we would need to power our brains—which is just not feasible. So it is unlikely that erectus—who was the first to have the human body plan and therefore the ability to run, which implies he would have needed higher-quality energy—would have survived on a diet of raw plant foods, since it would take so many hours to consume enough food to power his growing brain.

We can see that we are adapted to eating meat by looking at our intestines. Our small intestines are relatively long, whereas our large intestines are relatively short, which indicates that we became adapted to eating meat. Our “ability to eat significant quantities of meat and fish is a significant departure from the dietary norm of the haplorhine primates, especially for animals in the larger size classes.” Though “Humans share many features of their gut morphology with other primates, particularly the great apes, and have a gut structure that reflects their evolutionary heritage as plant, specifically ripe fruit, eaters” (Ulijaszek, Mann, and Elton, 2013: 63). Chimpanzees are not physiologically adapted to meat-eating, which can be seen in their development of hypercholesterolemia along with vascular disease, even when fed controlled diets in captivity (Ford and Stanford, 2004).

When consuming a lot of protein, though, “rabbit starvation” needs to be kept in mind. Rabbit starvation is a type of malnutrition that arises from eating little to no fat and high amounts of protein. Since protein intake is physiologically demanding (it takes the most energy to process of the three macros), Ben-Dor et al (2011) suggest a caloric ceiling of about 35 percent of kcal coming from protein. So erectus’ protein ceiling was 3.9 g per kg of body weight per day, whereas for Homo sapiens it is 4.0 g per kg of body weight per day. Ben-Dor et al (2011) show that erectus’ DEE (daily energy expenditure) was about 2704 kcal, with “a maximum long-term plant protein ceiling of 1014 calories“, implying that erectus was, indeed, an omnivore. So, of course, the consumption of protein and raw plants is physiologically limited. Since erectus’ ceiling on protein intake was 947 kcal and his ceiling on raw plant intake was 1014 kcal, then, according to the model proposed by Ben-Dor et al (2011), erectus would have needed to consume about 744 kcal from fat, which is about 27 percent of his overall caloric intake and 44 percent of his animal-product intake.

Neanderthals would have consumed between 74-85 percent of their daily caloric energy during glacial winters from fat, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016), while consuming between 3,360 and 4,480 kcal per day (Steegman, Cerny, and Holliday, 2002). (See more on the Neanderthal diet here.) Neanderthals consumed a large amount of protein, about 292 grams per day (Ben-Dor, Gopher, and Barkai, 2016: 370). Since our close evolutionary cousins (Neanderthals and erectus) ate large amounts of protein and fat, they were well-acclimated, physiologically speaking, to their high-protein diets. Their diets were not so high in protein that rabbit starvation would occur—fat was present in sufficient amounts in the animals that Neanderthals hunted and killed, so rabbit starvation was not a problem for them. But since rabbit starvation is a huge problem for our species, “It is therefore unlikely that humans could be true carnivores in the way felids are” (Ulijaszek, Mann, and Elton, 2013: 66).
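The macronutrient arithmetic in the last two paragraphs is easy to verify. The figures are the ones cited above from Ben-Dor et al (2011) and Ben-Dor, Gopher, and Barkai (2016); the 4 kcal per gram of protein conversion is the standard one:

```python
# Erectus (Ben-Dor et al, 2011): DEE ~2704 kcal/day, protein capped at
# ~35% of energy, raw-plant intake capped at ~1014 kcal; fat fills the rest.
dee = 2704
protein_kcal = 0.35 * dee                   # ~946 kcal
plant_kcal = 1014
fat_kcal = dee - protein_kcal - plant_kcal  # ~744 kcal
print(f"fat share of total intake:  {fat_kcal / dee:.0%}")  # ~27%
print(f"fat share of animal intake: "
      f"{fat_kcal / (fat_kcal + protein_kcal):.0%}")         # ~44%

# Neanderthals (Ben-Dor, Gopher, and Barkai, 2016): ~292 g protein/day.
# At 4 kcal/g and the upper intake of ~4480 kcal/day, protein supplies
# ~26% of energy, leaving ~74% for fat -- the lower bound cited above.
protein_share = (292 * 4) / 4480
print(f"Neanderthal protein share:  {protein_share:.0%}")    # ~26%
```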

We consume a diet that is both omnivorous and eclectic, which is determined by our phylogeny through the form of our guts; we have nutritional diversity in our evolutionary history. We needed to colonize new lands and, since animals can only consume what is in their ecosystem, the foods that are edible in said ecosystem will be what is consumed by that animal. Being eclectic feeders made the migration out of Africa possible.

But humans are not true carnivores, contrary to some claims. “Meat-eating has allowed humans to colonize high latitudes and very open landscapes. However, bearing in mind the phylogenetic constraints that prevent humans from being true carnivores, such expansion was probably not accomplished through meat-eating alone. Instead, humans have used their ability to technologically harvest, produce, and consume a very wide range of foods to help exploit all major biomes” (Ulijaszek, Mann, and Elton, 2013: 67).

Humans, though, lack the gut specialization and dentition to process grasses efficiently. This means that our ancestors ate animals that ate these things; the C4 plants consumed by their prey elevated the carbon isotope levels in the fossils we have discovered. Information like this implies that our ancestors ate across a wide variety of trophic levels and had substantial dietary diversity throughout evolutionary history.

“Hominins lack the specialized dentition found in carnivorans (the group of animals that includes the cat and dog families) and other habitual meat and bone eaters, so must have pre-processed at least some of the meat in their diet” (Ulijaszek, Mann, and Elton, 2013: 81). This is where stone tools come into play (Zink and Lieberman, 2016). “Processing” food can be anything from taking out nutrients to changing how the food looks. We can look at “food processing” as a form of pre-digestion before consumption. The use of stone tools, and cooking, was imperative for us to begin the processing of meat and other foods. This gave us the ability to “pre-digest” our food before consumption, which increases the available energy in any food that is cooked/processed. For example, cooking denatures protein strands and breaks down cell walls, gelatinizing the collagen in the meat, which allows for easier chewing and digestion. Carmody et al (2016) showed that adaptation to a cooked diet began around 275 kya.

In his book Catching Fire, Wrangham (2009: 17-18) writes:

Raw-foodists are dedicated to eating 100 percent of their diets raw, or as close to 100 percent as they can manage. There are only three studies of their body weight, and all find that people who eat raw tend to be thin. The most extensive is the Giessen Raw Food study, conducted by nutritionist Corinna Koebnick and her colleagues in Germany, who used questionnaires to study 513 raw-foodists who ate from 70 to 100 percent of their diet raw. They chose to eat raw to be healthy, to prevent illness, to have a long life, or to live naturally. Raw food included not only uncooked vegetables and occasional meat, but also cold-pressed oil and honey, and some items were lightly heated such as dried fruits, dried meat, and dried fish. Body mass index (BMI), which measures weight in relation to the square of the height, was used as a measure of fatness. As the proportion of food eaten raw rose, BMI fell. The average weight loss when shifting from a cooked to a raw food diet was 26.5 pounds (12 kilograms) for women and 21.8 pounds (9.9 kilograms) for men. Among those eating a purely raw diet (31 percent), the body weights of almost a third indicated chronic energy deficiency. The scientists’ conclusion was unambiguous: “a strict raw food diet cannot guarantee an adequate energy supply.”

Also, vegetarians and meat-eaters who cook their food have similar body weights. This implies that cooking food—no matter the type—yields more usable caloric energy for the body, and that raw-foodists are fighting a losing battle with biology, consuming raw foods in quantities our guts are not built to handle. As can be seen above in the citation from Fonseca-Azevedo and Herculano-Houzel (2012), great apes that eat nothing but raw food have the large guts and bodies needed to process the raw plant foods they eat; we cannot thrive on such a diet because it is neither calorically nor nutritionally viable for us—most importantly because of the size of our brains and their caloric requirements.
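
For concreteness, here is the BMI arithmetic from the quoted passage as a minimal sketch. The starting weight and height are hypothetical; 12 kg is the study’s reported average loss for women, and 18.5 is the standard WHO cutoff below which chronic energy deficiency is indicated:

```python
# BMI exactly as defined in the quoted passage: weight (kg) divided by the
# square of height (m).
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

before = bmi(65.0, 1.75)          # 21.2: normal range on a cooked diet (hypothetical)
after = bmi(65.0 - 12.0, 1.75)    # 17.3: after the average raw-diet loss for women

print(f"before: {before:.1f}, after: {after:.1f}")
print("chronic energy deficiency" if after < 18.5 else "adequate energy supply")
```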

Carmody, Weintraub, and Wrangham (2011) show that modern raw-foodists who subsist on raw meat and plants have nutrient deficiencies and chronic energy deficiencies even though they process their foods in various manners (cooking is a form of processing, as are cutting, mashing, and pounding), and that females on such diets experience low fecundity. Thus, the cooking of food seems to be needed for normal biological functioning; we have clearly evolved past consuming all-raw diets. So it is clear that cooking—along with meat-eating—was imperative to our evolution. (Which does not mean that humans only ate meat, or that eating meat and only meat is part of our evolutionary history.) Cooking gelatinizes food and denatures its proteins, making mastication easier and requiring less bite force, since the food is softer after cooking. Over time this led to smaller teeth, as seen in erectus (Zink and Lieberman, 2016). This was due to cooking along with tool use: tools produced smaller food particles, which demanded less force per bite, eventually leading to smaller teeth in our lineage.

Finally, humans are said by some to be “facultative carnivores.” A facultative carnivore is an animal that does best on a carnivorous diet but can survive—not thrive—on other foodstuffs when meat is not available. This, though, doesn’t make sense for humans. Humans are eclectic feeders—omnivorous in nature. Yes, we began cooking about 1.5 mya; yes, meat-eating (and the cooking of said meat) looms large in the evolution of our species; yes, without meat and cooking we could not have met the energy demands that came with our split from chimpanzees/great apes. But this does not mean that we do “best” on a carnivorous diet. There are about 7,105 ethnic groups in the world (Spencer, 2014: 1029), and to say that all of these ethnies would respond the same or similarly, physiologically speaking, to an all-meat diet is crazy talk. The claim that we subsisted mainly on one type of food throughout our evolutionary history is a bold one—with no basis in our evolutionary history.

Marlene Zuk (2013: 103-104), author of Paleofantasy, writes:

Another implication of the importance Marlowe attaches to bow hunting is that, rather than starting out as exclusively carnivorous and then adding starches and other plant material to the diet, ancient humans have been able to increase the proportion of meat only after newer technology had come about, a mere 30,000 years ago. Other anthropologists concur that the amount of meat in the human diet grew as we diverged from our other primate ancestors. All of this means that, first, contrary to the claims of many paleo-diet proponents, the earliest humans did not have an exclusively meat-based diet that we are best adapted to eat; and second, our ancestors’ diets clearly changed dramatically and repeatedly over the last tens, not to mention hundreds, of thousands of years, even before the advent of agriculture.

The assumption that we were fully (or even mostly) carnivorous and only later added plant foods/carbs is clearly false. “Fantasies” like this are just-so stories: they sound nice, but reality is clearly more nuanced than people’s evolutionary, Stone-Age imaginations. And this makes sense: we evolved from an LCA (last common ancestor) with chimpanzees some 6.3 mya (Patterson et al, 2006), and that LCA was most likely a forager who lived in the trees (Lieberman, 2013). Why, then, would we have ended up subsisting on an all-meat diet?

One thing, though, that I’m sure everyone agrees with is that the environments we have constructed for ourselves in the first world are maladaptive—what is termed an “evolutionary mismatch” (Lieberman, 2013; Genne-Bacon, 2014). The mismatch arises from the high-carb food environments we have constructed, full of cheap foodstuffs loaded with sugar, salt, and fat, a combination far more palatable (and more addictive) than any of those on their own (see Kessler, 2010). This makes the food hyper-palatable, so people eat more of it. Foods like this obviously did not exist in our OEE (original evolutionary environment), and they therefore cause us huge problems in our modern-day environments. Evolutionary mismatches occur when technological change outpaces the genome’s ability to adapt. This can clearly be seen in our societies in the explosion of obesity over the past few decades (Fung, 2016, 2018).

We did not evolve eating highly processed carbohydrates loaded with salt and sugar. That much everyone can agree on.

Conclusion

It is clear that the claims of both vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as “what we should be” eating. Humans are eclectic feeders; we will eat almost anything, since “Humans show remarkable dietary flexibility and adaptability“. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclecticism can be traced back to our australopithecine ancestors. The claim that we were either “vegetarian/vegan or carnivore” throughout our evolution is false.

Humans aren’t “natural carnivores” or “natural vegans/vegetarians”; humans are eclectic feeders, and animals eat whatever is in their ecosystem. Ergo, humans are omnivores. And though we can’t pinpoint the one “human diet”, since there is great variability in it due to culture and ecology, we know one thing: we did not subsist mainly on any single food. We had a large variety of foods, especially fallback foods, throughout our evolutionary history. So claims that we evolved to eat one certain way (as vegans/vegetarians and carnivores alike claim) are false. (Note that I am not saying high-carb diets are good; I’ve railed hard against them.)

Just-so Stories: FOXP2

1200 words

FOXP2 is a so-called “gene for” language. The gene encodes a transcription factor—meaning its protein product controls the activity of other genes—so changes to FOXP2 affect the expression of other genes as well. The evolution of language in humans was thus thought to have hinged on mutations in the FOXP2 gene. Humans who have a single-point mutation in FOXP2 “have impaired speech and grammar, but not impaired language comprehension” (Mason et al, 2018: 403). The gene is found in numerous mammals (e.g., chimpanzees, gorillas, orangutans, rhesus macaques, and mice), but none of those mammals speak. The gene is expressed in the areas of the brain that affect motor functioning, including the coordination needed to produce words.

Human and mouse FOXP2 differ by only three amino acids. Chimpanzees, gorillas, and macaques have identical amino acid sequences at FOXP2, and that shared sequence differs from the mouse sequence by just one amino acid; humans differ from the shared chimpanzee/gorilla/macaque sequence by two further amino acids. This difference of two amino acids between humans and the other primates thus appears to be what made the evolution of language possible. Evidence was claimed for strong selective pressure on these two FOXP2 mutations, which allow the brain, larynx, and mouth to coordinate to produce speech. The two altered amino acids may change the ability of the FOXP2 transcription factor to be phosphorylated (proteins are activated by phosphorylation and deactivated by dephosphorylation, or the reverse).
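
To make the counting concrete, here is a minimal sketch of how “differs by N amino acids” is computed: a positionwise (Hamming) comparison of aligned protein sequences. The 10-residue sequences below are made up for illustration and are not real FOXP2 sequences:

```python
# Count positionwise differences between two aligned protein sequences.
def aa_differences(seq_a: str, seq_b: str) -> int:
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq_a, seq_b))

mouse = "MTNSSTSLQA"   # hypothetical toy sequence
chimp = "MTNSSTSLQV"   # 1 difference from 'mouse'
human = "MTHSSTNLQV"   # 2 further differences from 'chimp'

print(aa_differences(mouse, chimp))  # 1
print(aa_differences(chimp, human))  # 2
print(aa_differences(mouse, human))  # 3
```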

Mason et al (2018: 403) write:

Comparative genomics efforts are now extending beyond primates. A role for FOXP2 in songbird singing and vocal learning has been proposed. Mice communicate via squeaks, with lost young mice emitting high-pitched squeaks. FOXP2 mutations leave mice squeakless. For mice and songbirds, it is a stretch to claim that FOXP2 is a language gene—but it is likely needed in the neuromuscular pathway to make sounds.

[Figure 18.17 from Mason et al (2018: 403), showing synonymous and nonsynonymous changes in FOXP2 across taxa]

Above is Figure 18.17 from Mason et al (2018: 403). They write:

Comparisons of synonymous and nonsynonymous changes in mouse and primate FOXP2 genes indicate that changing two amino acids in the gene corresponds to the emergence of human language. Black bars represent synonymous changes; gray bars represent nonsynonymous changes.
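
What “synonymous” versus “nonsynonymous” means in that figure can be shown in a few lines. This sketch uses a deliberately tiny codon table covering only the codons in the example; the amino acid assignments follow the standard genetic code:

```python
# A DNA substitution is synonymous if the mutated codon still encodes the
# same amino acid, nonsynonymous if it changes the encoded amino acid.
CODON_TABLE = {"AAT": "Asn", "AAC": "Asn", "AAA": "Lys",
               "TCT": "Ser", "ACT": "Thr"}

def classify(codon_before: str, codon_after: str) -> str:
    same = CODON_TABLE[codon_before] == CODON_TABLE[codon_after]
    return "synonymous" if same else "nonsynonymous"

print(classify("AAT", "AAC"))  # synonymous: both encode Asn
print(classify("AAT", "AAA"))  # nonsynonymous: Asn -> Lys
print(classify("TCT", "ACT"))  # nonsynonymous: Ser -> Thr
```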

But is that the whole story? Is FOXP2 really a “gene for” language? New results call this hypothesis into question.

In their paper No Evidence for Recent Selection at FOXP2 among Diverse Human Populations, Atkinson et al (2018) found no evidence for recent positive or balancing selection at the locus. Atkinson et al (2018) conclude that they:

do not find evidence that the FOXP2 locus or any previously implicated site within FOXP2 is associated with recent positive selection in humans. Specifically, we demonstrate that there is no evidence that the original two amino-acid substitutions were targeted by a recent sweep limited to modern humans <200 kya as suggested by Enard et al. (2002) … Any modified function of the ROI does not appear to be related to language, however, as modern southern African populations tolerate high minor allele frequencies with no apparent consequences to language faculty. We do not dispute the extensive functional evidence supporting FOXP2’s important role in the neurological processes related to language production (Lai et al., 2001, MacDermot et al., 2005, Torres-Ruiz et al., 2016). However, we show that recent natural selection in the ancestral Homo sapiens population cannot be attributed to the FOXP2 locus and thus Homo sapiens’ development of spoken language.

So the two mutations in exon 7 of FOXP2 were not selected and are not responsible for human language. The accelerated evolutionary rate at the locus is most likely due to a loss of function (LoF) (a null allele), not positive selection.

The gene was originally discovered in a family with a history of speech and language disorders (Lai et al, 2001). This “speech gene” was also found in Neanderthals in 2007 (see Krause et al, 2007). Thus, the modifications to FOXP2 occurred before humans and Neanderthals diverged.

So Atkinson et al (2018) found that the putative sweep on FOXP2 within the last 200 kya was a statistical artifact caused by lumping Africans together with Caucasians and other populations. Of course, language is complicated, and no single gene will explain the emergence of human language.

This is a just-so story—that is, an ad hoc hypothesis: humans had X, others didn’t have X (or had a different form of X); therefore X explains human language faculties.

Atkinson et al’s (2018) results represent a substantial revision to the adaptive history of FOXP2, a gene regarded as vital to human evolution. They write:

High evolutionary constraint among taxa but variability within Homo sapiens is compatible with a modified functional role for this locus in humans, such as a recent loss of function.

Therefore, this SNP must not be necessary for language function as both alleles persist at high frequency in modern human populations. Though perhaps obvious, it is important to note that there is no evidence of differences in language ability across human populations. (Atkinson et al, 2018)

This is another just-so story (Gould and Lewontin, 1979; Lloyd, 1999; Richardson, 2007; Nielsen, 2009) that seems to have bitten the dust. Of course, the functionality of FOXP2 and its role in the neurological processes related to language are not in dispute; what is disputed (and refuted) is the selectionist just-so story. Selectionist explanations are necessarily ad hoc. Thus, recent natural selection in our species cannot be attributed to the FOXP2 locus, and neither, then, can our language capabilities.

There is a similar objection, not to FOXP2, but to selectionist hypotheses about the lactase gene. Nielsen (2009) puts it succinctly:

The difference in lactose intolerance among human geographic groups is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). … This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question. … Although the presence of selection acting on genes underlying a phenotypic trait of interest does help support adaptive stories, it does not establish that selection acted directly on the specific trait of interest.

Even if there were evidence of positive selection on FOXP2 in humans, we could not logically state that selection acted on the trait of language; a functional effect combined with evidence of selection does not demonstrate that “selection” acted on that specific trait. Just-so stories (ad hoc hypotheses) “sound good”, but that’s only because they are necessarily true of the data: one can have all the data one wants, think up any adaptive story to explain that data, and the story will fit by construction. Therefore, selectionist hypotheses are inherently ad hoc.

In conclusion, another selectionist hypothesis bites the dust. Never mind the fact that, even if FOXP2 were “selected-for”, there would still be the problem of free riders (Fodor and Piattelli-Palmarini, 2010). That is, “selection” cannot “select-for” fitness-enhancing traits if/when they are coextensive with other traits—there is no way for selection to distinguish between coextensive traits, and thus it does not explain trait fixation (in this case, the fixation of FOXP2). Ad hoc hypotheses are necessarily true in the sense that they explain the data they purport to explain, and only that data. These new results show that there is no support for positive selection at the FOXP2 locus.