Search Results for: gould
Wind back the tape of life to the origin of modern multicellular animals in the Cambrian explosion, let the tape play again from this identical starting point, and the replay will populate the earth (and generate a right tail of life) with a radically different set of creatures. The chance that this alternative set will contain anything remotely like a human being must be effectively nil, while the probability of any kind of creature endowed with self‐consciousness must also be extremely small. (Gould, 1996. Full House)
Wind back the tape of life to the early days of the Burgess Shale; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like human intelligence would grace the replay. (Gould, 1987. Wonderful Life)
Wind back the clock to Cambrian times, half a billion years ago, when mammals first exploded into the fossil record, and let it play forwards again. Would that parallel be similar to our own? Perhaps the hills would be crawling with giant terrestrial octopuses. (Lane, 2015: 21. The Vital Question)
I first read Full House (Gould, 1996) about two years ago. I never was one to believe in evolutionary “progress”, though. As I read through the book, seeing how Gould weaved his love for baseball into an argument against evolutionary “progress” enthralled me. I love baseball, I love evolution, so this was the perfect book for me (indeed, one of my favorite books I have read in my life—and I have read a lot of them). The basic argument goes like this: there are more bacteria on earth than organisms deemed more “advanced”; if evolutionary “progress”—as popularly believed—were true, then “advanced” mammals would outnumber bacteria; there are more bacteria (“simpler” organisms) than mammals (more “advanced” organisms); therefore evolutionary “progress” is an illusion.
Evolutionary “progress” is entrenched in our society, as can be seen from popular accounts of human evolution (see picture below):
This is the type of “progress” that permeates the minds of the public at large.
Some may look at the diversity of life and conclude that there is a type of “progress” to evolution. However, Gould dispatches this assertion with his drunkard argument. Imagine a drunkard leaving a bar. On one side is the bar wall (the left wall of minimal complexity); on the other is the gutter. As the drunkard staggers randomly, he may lurch back and forth between the wall and the gutter, but he ends up in the gutter every time—not because anything pushes him that way, but because the wall blocks movement in one direction.
Gould then explains his reasoning for using this type of argument:
I bring up this old example to illustrate but one salient point: In a system of linear motion structurally constrained by a wall at one end, random movement, with no preferred directionality whatever, will inevitably propel the average position away from a starting point at the wall. The drunkard falls into the gutter every time, but his motion includes no trend whatever toward this form of perdition. Similarly, some average or extreme measure of life might move in a particular direction even if no evolutionary advantage, and no inherent trend, favor that pathway (Gould, 1996: 151).
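Gould’s point can be checked with a toy simulation (my own sketch, not Gould’s; the step size, step count, and replay count are arbitrary choices): an unbiased walk with a reflecting wall at one end drifts away from the wall on average, even though no step favors either direction.

```python
import random

def drunkard_walk(steps=1000, step_size=1.0, wall=0.0):
    """Unbiased random walk with a wall at `wall` (the bar wall).

    Each step goes left or right with equal probability; the walker
    cannot pass the wall. There is no rightward bias in the rules,
    yet the average final position ends up well away from the wall.
    """
    pos = wall
    for _ in range(steps):
        pos += random.choice((-step_size, step_size))
        if pos < wall:      # bounce off the bar wall
            pos = wall
    return pos

random.seed(42)
# Replay the "tape" many times and average the final positions
walks = [drunkard_walk() for _ in range(2000)]
mean_pos = sum(walks) / len(walks)
print(round(mean_pos, 1))  # well to the right of the wall, with no trend built in
```

The mean position grows with time purely because the wall truncates the left half of the distribution—random motion plus a lower bound looks like a trend without being one.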
The claim that there is a type of “progress” to evolution is only due to the fact—in my opinion—that humans exist and are the most “advanced” species on earth.
It seems that J. P. Rushton did not read this critique of evolutionary “progress”: not even a year after Gould published Full House, Rushton published a new edition of Race, Evolution, and Behavior (Rushton, 1997) in which he argues (on pages 292-294) that there is, indeed, “progress” to evolution. He cites Aristotle, Darwin (1859), Wilson (1975), Russell (1983, 1989; read my critique of Russell’s theory), and Bonner.
To be brief:
The Great Chain of Being (which Rushton’s r/K selection theory attempts to revive) is not valid; Wilson’s idea of “biological progression” is taken care of by Gould’s drunkard argument; Bonner’s question of why there has been evolution from simple to advanced is, likewise, answered by the drunkard argument; and finally there is Dale Russell’s argument about Troodon (which I will expand on below).
Rushton claims that Russell, in his 1989 book Odysseys in Time: Dinosaurs of North America (which I bought specifically to get more information on Russell’s thoughts on the matter and for an article on it), argued that “if [dinosaurs] had not gone extinct, dinosaurs would have progressed to a large-brained, bipedal descendent” (Rushton, 1997: 294). Either Rushton only glanced at Russell’s writings or he is being dishonest: Russell claimed that had the dinosaurs not gone extinct, one dinosaur—Troodon—would have evolved into a bipedal, human-like being. Russell made these claims because Troodon had an EQ about six times that of the average dinosaur, ran on two legs, and had use of its ‘hands.’ So, Russell argues, had the dinosaurs not gone extinct, the troodons could possibly have become human-like. However, there are two huge problems for this hypothesis.
In the book Up From Dragons, Skoyles and Sagan (2002: 12) write:
But cold-bloodedness is a dead-end for the great story of this book—the evolution of intelligence. Certainly reptiles could evolve huge sizes, as they did over vast sweeps of Earth as dinosaurs. But they never could have evolved our quick-witted and smart brains. Being tied to the sun restricts their behavior: Instead of being free and active, searching and understanding the world, they spend too much time avoiding getting too hot or too cold.
So, since dinosaurs were cold-blooded and being tied to the sun restricts behavior, even if they had survived the K-T extinction event, it is highly implausible that they would have evolved brains our size.
Furthermore, Hopson (1977: 444) writes:
I would argue, as does Feduccia (44), that the mammalian/avian levels of activity claimed by Bakker for dinosaurs should be correlated with a great increase in motor and sensory control and this should be reflected in increased brain size. Such an increase is not indicated by most dinosaur endocasts.
Gould even writes in Wonderful Life:
If mammals had arisen late and helped to drive dinosaurs to their doom, then we could legitimately propose a scenario of expected progress. But dinosaurs remained dominant and probably became extinct only as a quirky result of the most unpredictable of all events—a mass dying triggered by extraterrestrial impact. If dinosaurs had not died in this event, they would probably still dominate the large-bodied vertebrates, as they had for so long with such conspicuous success, and mammals would still be small creatures in the interstices of their world. This situation prevailed for one hundred million years, why not sixty million more? Since dinosaurs were not moving towards markedly larger brains, and since such a prospect may lay outside the capability of reptilian design (Jerison, 1973; Hopson, 1977), we must assume that consciousness would not have evolved on our planet if a cosmic catastrophe had not claimed the dinosaurs as victims. In an entirely literal sense, we owe our existence, as large reasoning mammals, to our lucky stars. (Gould, 1989: 318)
I really don’t think it’s possible that brains our size would have evolved had the dinosaurs not gone extinct, and the data we have about dinosaurs strongly support that assertion.
Staying on the topic of progression and brain size, there is one more thing I want to note. Deacon (1990a) argues that fallacies lurk in the assertion that brain size progressed throughout evolutionary history. One of Deacon’s fallacies is the “evolutionary progression fallacy.” The concept of “progress” finds “implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar” (Deacon, 1990a: 195).
This, in my opinion, is the last refuge for progressionists: looking at the apparent rise of brain size in evolutionary history and saying “Aha! There it is—progress!” But the so-called progress in brain size evolution is a product of allometric scaling: there is no true “progress” in brain size, and no unbiased allometric baseline exists, so these claims from progressionists fail. Lastly, Deacon (1990b) argues that so-called brain size progress vanishes when functional specialization is taken into account.
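The allometry point can be made concrete with a toy calculation (my own sketch, not Deacon’s; the masses below are rough textbook figures, not a rigorous comparative dataset): raw brain size and allometrically corrected brain size pick out different “leaders,” which is why raw size alone cannot ground a progress claim.

```python
import numpy as np

# Approximate brain and body masses in grams (rough illustrative figures)
species = ["mouse", "rat", "cat", "chimp", "human", "elephant"]
body  = np.array([25, 300, 3_300, 45_000, 65_000, 5_000_000], dtype=float)
brain = np.array([0.4, 2, 30, 400, 1_350, 4_700], dtype=float)

# Fit the allometric trend: log(brain) = a + b * log(body)
b, a = np.polyfit(np.log10(body), np.log10(brain), 1)

# Encephalization = residual from the trend, i.e. brain size left over
# after body-size scaling is accounted for
residuals = np.log10(brain) - (a + b * np.log10(body))

biggest_raw = species[int(np.argmax(brain))]            # elephant
most_encephalized = species[int(np.argmax(residuals))]  # human
print(biggest_raw, most_encephalized)
```

The elephant has the largest raw brain, yet the human has the largest residual: which species looks “most advanced” depends entirely on the baseline chosen, and Deacon’s argument is that no unbiased baseline exists.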
Therefore it is unlikely that dinosaurs would have evolved brains our size.
In sum, there are many ways that progressionists attempt to show that there is “progress” in evolution. However, they all fail since Gould’s argument is always waiting to rear its head. Yes, some organisms have evolved greater complexity—i.e., moved toward the right wall—though this is not evidence for “progress.” Many—if not all—accounts of “progress” fail. There is no “progress” in brain size evolution; there would not be human-like dinosaurs had the dinosaurs not gone extinct in the K-T extinction event. We live on a planet of bacteria, and since we live on a planet of bacteria—that is, since bacteria are the most numerous type of organism on earth, evolutionary progress cannot be true.
Complexity—getting to the right wall—is an inevitability, just as it is an inevitability that the drunkard would eventually stumble to the gutter. But this does not mean that there is “progress” to evolution.
The argument in Gould’s Full House can be simply stated like this:
P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria, insects) on earth than “advanced” organisms (mammals).
C Therefore evolutionary “progress” is illusory.
Stephen Jay Gould was one of the biggest opponents of hereditarianism, one of Rushton and Jensen’s biggest opponents. He is the author of The Mismeasure of Man, which is still given to college students to read as a “definitive refutation of The Bell Curve” and an all-out attack on factor analysis, IQ testing, and the hereditarian position at large. A passage from the very end of his book Full House perfectly explains his thinking on this matter:
“The most impressive contrast between natural evolution and cultural evolution lies embedded in the major fact of our history. We have no evidence that the modal form of human bodies or brains has changed at all in the past 100,000 years—a standard phenomenon of stasis for successful and widespread species, and not (as popularly misconceived) an odd exception to an expectation of continuous and progressive change. The Cro-Magnon people who painted the caves of Lascaux and Altamira some fifteen thousand years ago are us—and one look at the incredible richness and beauty of this work convinces us, in the most immediate and visceral way, that Picasso held no edge in mental sophistication over these ancestors with identical brains. And yet, fifteen thousand years ago no human social grouping had produced anything that would conform with our standard definition of civilization. No society had yet invented agriculture; none had built permanent cities. Everything that we have accomplished in the unmeasurable geological moment of the last ten thousand years—from the origin of agriculture to the Sears building in Chicago, the entire panoply of human civilization for better or for worse—has been built upon the capacities of an unaltered brain. Clearly, cultural change can vastly outstrip the maximal rate of natural Darwinian evolution.” (Gould, 1996: 220)
He wrote Full House as a sequel of sorts to his book Wonderful Life: The Burgess Shale and the Nature of History (currently on its way to my home; I will read it within a few days of getting it), where he argues that progress is not the driver of evolution and that complexity does not rule; bacteria rule the planet. He argues that we are not in the “Age of Mammals” but the “Age of Bacteria”. But how could one argue that there was no change in humanity from our most recent ancestors to today?
Eldredge and Gould pioneered the theory of punctuated equilibria in 1972. The theory states that species lie in a state of stasis (that is, a period of equilibrium with little morphological change) before a rapid burst of change, which explains why so few transitional fossils are found. Punctuated equilibria is the missing piece of Darwin’s theory of evolution. But what does it have to do with the evolution of Man?
As you can see, Eldredge and Gould’s theory states that all species spend an extremely long time in stasis, and for any phenotypic change to be noticed in the fossil record, the rapid burst in change had to occur.
Quoting Gould on culture and evolution (1996: 219-20):
But human cultural change is an entirely distinct process operating under radically different principles that do allow for the strong possibility of a driven trend for what we may legitimately call “progress” (at least in a technological sense, whether or not the changes ultimately do us any good in a practical or moral way). In this sense, I deeply regret that common usage refers to the history of our artifacts and social organizations as “cultural evolution.” Using the same term—evolution—for both natural and cultural history obfuscates far more than it enlightens. Of course, some aspects of the two phenomena must be similar, for all processes of genealogically constrained historical change must share some features in common. But the differences far outweigh the similarities in this case. Unfortunately, when we speak of “cultural evolution,” we unwittingly imply that this process shares essential similarity with the phenomenon most widely described by the same name—natural, or Darwinian, change. The common designation of “evolution” then leads to one of the most frequent and portentous errors in our analysis of human life and history—the overly reductionist assumption that the Darwinian natural paradigm will fully encompass our social and technological history as well. I do wish that the term “cultural evolution” would drop from use. Why not speak of something more neutral and descriptive—“cultural change,” for example?
From the two passages I cited above, to his work on punctuated equilibria, I can definitely see how and why he would believe that there has been no relevant human evolution in the past 50,000 years. These two quotes, one from Stephen Jay Gould and the other from evolutionist Ernst Mayr, show the “conventional wisdom” about human evolution:
There’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture we’ve built with the same body and brain.
—Stephen Jay Gould
Something must have happened to weaken the selective pressure drastically. We cannot escape the conclusion that man’s evolution towards manness suddenly came to a halt.
—Ernst Mayr
These quotes are from page 1 of The Ten Thousand Year Explosion. Many great thinkers have suggested that human evolution halted with the emergence of behavioral modernity; however, this couldn’t be further from the truth. I fully understand why such great evolutionists as Gould and Mayr believed that human evolution had halted; their arguments make complete sense given the data they had (punctuated equilibria, for one). But any knowledgeable race-realist knows that these claims are bunk and that human evolution has most definitely accelerated within the last 10,000 years: agriculture made a bigger population possible and, along with it, a higher chance for high-IQ alleles and other positive traits to spread throughout the population as they increased fitness in the environment.
HOWEVER, agriculture was both good and bad for us. The good: it increased our population size, which made it possible for high-IQ alleles to spread throughout the population. The bad: along with the increase in population size, living in one spot with large groups of people raised the chances of acquiring diseases not found in hunter-gatherer populations (who are constantly moving, not staying in one place). According to John Hawks, our brain size has decreased from about 1500 cc on average to about 1350 cc, and the cause, hard to believe given the advent of agriculture (and thus supposedly better nutrition), is worse nutrition brought on by agriculture. Another reason I can posit is that more group behavior and social cohesion meant we could work with others, and over time that would shrink our brains, since no one individual had to “do all the thinking”: a type of “self-domestication”, if you will.
The denial of any human change over the past 50,000 years is clearly ridiculous, however well grounded it once seemed in solid science. With The Ten Thousand Year Explosion, Cochran and Harpending blasted away the misconception that there has been no genetic change in humanity over the past 50,000 years. But, to the dismay of those who believe in “progressive evolution”, the same agriculture that was responsible for this boom over the past ten thousand years is also a cause of our decreasing brain size and stature. I’ve documented the change of erectus or habilis into floresiensis; this is proof enough that evolution can “work backward” (whatever that means) and leave an organism “less complex” (going back to the left and right walls of complexity, which I just wrote on last night). Floresiensis is the perfect example that an organism can become less complex than its predecessor: in this case, less caloric energy was available on the island, and a smaller brain was what was best for that environment.
While Gould makes a compelling argument against an explosion of Man in the past 50,000 years, modern data tell us otherwise. This explosion was due, in part, to agriculture, which led to more social cohesion (both variables are also contributing to a decrease in brain size). With an understanding of Eldredge and Gould’s punctuated equilibria theory, you can see how and why Gould denied genetic change in anatomically modern humans over the last 50,000 years. He, however, is wrong here.
I fully agree with Gould that cultural change can outstrip Darwinian evolution—he is right on that point. But to then leap to the claim that there has been no genetic change in AMH (anatomically modern humans) is clearly wrong. I know that Gould was driven, partly, by his politics to deny any change in human nature and genetics over the past 50,000 years. Though, I don’t care about that; I care about looking at one’s perspective through a scientific lens. While Gould is wrong in his views on hereditarianism, he is 100 percent correct on “progressive” evolution and the absence of any so-called “drive to complexity”. It is his views on human evolution as a whole that are wrong. We know that faster evolution gives rise to more racial differences, and, obviously, more “differences” can be either “good” or “bad” depending on the environmental context. In my tirades over the past six weeks on the non-progressiveness and non-linearity of evolution, I’ve shown that these differences can go toward either the “left wall” or the “right wall” of complexity.
To deny that evolution has sped up since the advent of modern behavior, and even since the agricultural revolution, is wrong; too much evidence has piled up against that position. Having read a lot of Gould’s work recently, I do understand where he was coming from with that argument, although he was clearly wrong. Culture is learned—not biologically inherited. The cultural norms we know well are learned behaviors.
Finally, and this is what it seems Gould didn’t realize: there is gene-culture coevolution. Learned social information is central to our adaptations as humans. New cultural tendencies may impose novel selection pressures that bring about new phenotypic changes. In this sense, genes and culture evolve side by side. Again, there is no “unilateral direction” in which these changes go; they simply occur in response to new environmental pressures. Thus, to say that there is any “progress” or inherent “drive” in evolution makes no sense. The culture a population “inherits” drives which changes occur in that population and not in another. Populations would differ (as we know all genetically isolated human populations do), but none would be “better” than another, since each has acquired traits suited to its own environment; each culture exerts a different selective pressure on its population and so produces a different phenotype.
The notion of no change in humans over the last 50,000 years is wrong. Change has been driven by the rise of agriculture (giving us both positive and negative traits) along with the culture each population adopted under differing selection pressures and environments while genetically isolated from every other human population. These differing cultural tendencies also gave rise to slightly faster evolution and to novel environments relative to other populations. Working in harmony, these variables accelerated human evolution (for better or worse). The advent of behavioral modernity some 50,000 years ago gave rise to the Out of Africa event, and humans then spread across the planet. In time, the differing founding populations of the current races/ethnies adopted differing cultures under differing evolutionary pressures. This is the main reason genetically isolated human populations show such stark differences: evolution has sped up since the advent of behavioral modernity, agriculture, and the adoption of culture, all of which have contributed to making Man so different from the rest of the Animal Kingdom.
It seems like every day something new comes out that attempts to discredit the reality of g (this paper came out in 2012). Stephen Jay Gould (in)famously wrote in The Mismeasure of Man:
The argument begins with one of the fallacies—reification, or our tendency to convert abstract concepts into entities (from the Latin res, or thing). We recognize the importance of mentality in our lives and wish to characterize it, in part so that we can make the divisions and distinctions among people that our cultural and political systems dictate. We therefore give the word “intelligence” to this wondrously complex and multifaceted set of human capabilities. (emphasis mine)
“The results disprove once and for all the idea that a single measure of intelligence, such as IQ, is enough to capture all of the differences in cognitive ability that we see between people,”
“Instead, several different circuits contribute to intelligence, each with its own unique capacity. A person may well be good in one of these areas, but they are just as likely to be bad in the other two,”
Just like The Mismeasure of Man is “the definitive refutation to the argument of The Bell Curve”, right?
In the above paper, they cite Gould twice, writing:
It remains unclear, however, whether population differences in intelligence test scores are driven by heritable factors or by other correlated demographic variables such as socioeconomic status, education level, and motivation (Gould, 1981; . . .
It has been shown over numerous studies that population differences in intelligence are driven by heritable factors (Rushton and Jensen, 2005; Lynn and Vanhanen, 2006; Winick, Meyer, and Harris, 1975; Frydman and Lynn, 1988; Rushton, 2005).
More relevantly, it is questionable whether they relate to a unitary intelligence factor, as opposed to a bias in testing paradigms toward particular components of a more complex intelligence construct (Gould, 1981;
I will prove the existence of g in this article. There is also an empirical basis for the g factor.
It’s getting old that researchers still think they can “disprove g”, as a multitude of studies have already corroborated Spearman’s hypothesis. That is the scientific method: test the same hypothesis across a multitude of different studies, check its predictions by experiment or further observation, and modify the hypothesis when new information comes to light; repeat those steps until there are no discrepancies between theory and observation. When consistency is obtained, it becomes a theory that provides a coherent set of premises explaining a class of events.
How many times has the Hampshire et al hypothesis been corroborated? I doubt it has been corroborated as many times as Spearman’s hypothesis has.
As I said the other day, Jensen tested Spearman’s hypothesis on 25 large independent samples, and each sample confirmed it. Even matching blacks and whites for SES didn’t diminish the effect. Jensen concludes that the chance of Spearman’s hypothesis being wrong is less than 1 in a billion. Pretty long odds.
Even then, if this study were replicated as many times as Spearman’s hypothesis has been, it still wouldn’t disprove g.
He (Gould) continues: “The fact that Herrnstein and Murray barely mention the factor-analytic argument forms a central indictment around The Bell Curve and is an illustration of its vacuousness.” Where, Gould asks, is the evidence that g “captures a real property in the head?”
Murray states that they “barely brought up the factor-analytic argument” because it was out of date; Gould was using statistics on g that were 50-plus years old. A reviewer of the book for the journal Nature likewise said that Gould’s “discussion of the theory of intelligence stops at the stage it was more than a quarter of a century ago.” Gould was using old arguments, and, as Arthur Jensen states in his response to Gould:
Of all the book’s references, a full 27 percent precede 1900. Another 44 percent fall between 1900 and 1950 (60 percent of those are before 1925); and only 29 percent are more recent than 1950.
More than half of Gould’s references in The Mismeasure of Man are more than 50 years out of date. Clearly, he was attempting to denigrate the old studies of intelligence, i.e., phrenology, even though a recent paper in the journal Nature said:
The genomic regions identified include several novel loci, some of which have been associated with intracranial volume
So, we have several loci associated with intracranial volume; this shows that the skull studies of yesteryear weren’t crazy. Moreover, Rushton and Ankney (1996) “reviewed 32 studies correlating measures of external head size with IQ scores or with measures of educational and occupational achievement, and they found a mean r [of] .20 for people of all ages, both sexes, and various ethnic backgrounds, including African Americans”. A correlation of .20 between external head size and IQ is modest, but it is there. This shows that Gould’s argument on phrenology is bunk, as modern studies confirm a slight correlation between head size and IQ, and therefore g.
The fact that researchers are still recycling Gould’s arguments on g shows that there really is no good argument to discount it. Basically, any and all arguments that attempt to discredit g are bunk, as Spearman’s hypothesis has been empirically verified:
Conclusion: Mean group differences in scores on cognitive-loaded instruments are well documented over time and around the world. A meta-analytic test of Spearman’s hypothesis was carried out. Mean differences in intelligence between groups can be largely explained by cognitive complexity and the present study shows clearly that there is simply no support for cultural bias as an explanation of these group differences. Comparing groups, whether in the US or in Europe, produced highly similar outcomes.
Along with Jensen’s 25 large independent studies, which put the probability that Spearman’s hypothesis is false at less than 1 in a billion, this shows that Spearman’s hypothesis is an empirical scientific fact.
Newman and Just (2005) report that in both verbal and spatial conditions the frontal cortex showed greater activation for high-g than for low-g individuals, supporting the idea that g reflects functions of the frontal lobe. The “seat” of general intelligence is the prefrontal cortex (Cole et al., 2011; Roth, 2011). This can also be verified with MRI scans showing that those with higher g have bigger prefrontal cortices than those with lower g.
Moreover, Colom et al. (2006) show that in their sample the neuroanatomic areas underlying the g factor could be found across the entire brain, including the frontal, parietal, temporal, and occipital lobes; the factor is present throughout the brain, and these areas work together in concert to manifest intellectual ability.
Other researchers have also used the method of correlated vectors (MCV) on functional Magnetic Resonance Imaging (fMRI), which measures brain activity by detecting changes associated with blood flow. The technique is useful because cerebral blood flow and neuronal activation are correlated. Lee et al. write:
In conclusion, we suggest that higher order cognitive functions, such as general intelligence, may be processed by the coordinated activation of widely distributed brain areas. Superior g ability may be attributable to the functional facilitation rather than the structural peculiarity of the neural network for g. In addition, our results demonstrated that the posterior parietal regions including bilateral SPL and right IPS could be the neural correlates for superior general intelligence. These findings would be the early step toward the development of biological measures of g which leads to new perspectives for behavior interventions improving general cognitive ability.
They also used the MCV to find that the frontal and parietal lobes are associated with g. These studies show that g shows up throughout the brain and not in one solitary spot (though the PFC is still the seat of intelligence); this is yet another biological basis for g.
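To make the method of correlated vectors concrete, here is a minimal sketch with invented numbers (not taken from Jensen, Lee et al., or any actual study): the method correlates the vector of subtests’ g-loadings with the vector of each subtest’s correlation with some external variable.

```python
import numpy as np

# Hypothetical numbers for illustration only: g-loadings of six subtests
# and each subtest's correlation with an external measure (e.g. a brain
# activation index). These values are invented, not from any study.
g_loadings = np.array([0.81, 0.74, 0.66, 0.58, 0.49, 0.35])
external_r = np.array([0.42, 0.37, 0.30, 0.28, 0.20, 0.12])

# MCV: Pearson correlation between the two vectors. A high value means
# the external measure rises with the subtests' g-saturation.
mcv = np.corrcoef(g_loadings, external_r)[0, 1]
print(round(mcv, 2))
```

The single output number is what MCV studies report: the closer it is to 1, the more the external variable tracks g rather than subtest-specific abilities.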
Hampshire, et al write:
Thus, these results provide strong evidence that human intelligence is a construct that emerges from the functioning of anatomically dissociable brain networks.
However, the above studies confirm that the seat of intelligence is the prefrontal cortex, and that great g ability may be attributable to the functional facilitation, rather than the structural peculiarity, of the neural network for g. Together with the study corroborating Spearman’s hypothesis, this shows that g is a real and measurable thing. Although the Hampshire paper claims to “demonstrate that different components of intelligence have their analogs in distinct brain networks,” what all of these studies show is that (a) higher order cognitive functions may be processed by the coordinated activation of widely distributed brain areas (contradicting that quote), (b) the seat of g is the prefrontal cortex, (c) those with more g have bigger prefrontal cortices, and therefore bigger brains, and (d) Spearman’s hypothesis has been corroborated numerous times by many researchers not named Arthur Jensen.
Highfield (one of the researchers in the study) ends the article as follows:
“We already know that, from a scientific point of view, the notion of race is meaningless. Genetic differences do not map on to traditional measurements of skin colour, hair type, body proportions and skull measurements.
This is something that never ends; it always comes up no matter how many times it’s been said. People can say “race is a social construct” all they want, it doesn’t make it true as there is a biological reality to race.
Now we have shown that IQ is meaningless too,” Dr Highfield said.
When will people learn not to cite men who have smeared their legacy in an attempt to defame men who they disagreed with ideologically? Citing Steven Jay Gould in 2016 shows a bias to want to discredit g as a main factor for many things in life including SES, educational attainment, wealth attainment and so forth. The g factor is a measurable thing, with the seat of the factor being the prefrontal cortex. No amount of attempting to dispute this factor can be done, as it’s been empirically verified numerous times.
IQ is the reason why the education gap won’t be closed. We all know that. Even the Marxists know that. But, for some reason because of their egalitarian mindset, they won’t accept the facts. You know damn well that these Marxists KNOW for a FACT that things such as IQ, educational achievement, success and others are a result of genetics, but because of their idiocy, they clamor on about how ‘we are all equal’ and all of this trash. Which is, obviously, simply not true at all. If ‘environment’ is the cause of low IQ, and therefore low educational achievement, among other positive factors involving IQ and successes in life, how come negros raised in white homes don’t score the same as whites? It’s obviously genetic. Idiots like to cite Eyferth (the German study after WW2), which also had 25 percent French North Africans according to Rushton and Jensen, which ended up skewing the sample. They did not test them again at adulthood, and again, according to Rushton and Jensen, the gap in IQ really starts to become noticeable DURING those years that they tested the children at around 11 years of age.
Also, the Tizard study that put blacks, whites and mixed negro/whites in a nursery setting, 85 kids at 2 to 5 years of age, why do people cite THIS study?! It doesn’t matter in the context that we are speaking of. It’s well known that IQ is more malleable in young kids and that at adulthood that environment doesn’t matter. We also have the Moore study, which tested 25 negros, aged 7 to 10 years old raised in a white family, as well as 23 negros raised in a black family. The ones adopted into a black family scored 104, compared to the ones adopted into white families who scored 117. People may say “Well, they didn’t differ in their environment and not their genes, so, therefore, the B-W IQ gap is 100 percent environmental.” Retarded. Again, the IQ gap HEIGHTENS at this age that they tested them at. Egalitarians really need new ammo if they want to attempt to beat us. Because the facts are on our side.
We also know that educational achievement is genetic as well (60 percent heritable). The studies say nothing about race in the involvement with the achievement gap, but all you have to do is look at SAT, ACT (here is Kentucky in comparison with Louisiana, Tennessee and Mississippi), ASVAB and other similar test scores to be able to see that minorities (Hispanics and negros) do not score as high as whites or Asians. Then people like to say “Hurrr the black social structure isn’t conducive to learning and being intellectual”. Bullshit. Why may that be? Do you think that one say some negros all of a sudden said: “Let’s just say that being intelligent is stupid and beat up other negros (outliers) because they are smart.” Nah. Doesn’t work like that. At all. It’s, as we all know, genetic.
Let’s talk about Africa. Average IQ 70, due in part to nutrient deficiencies, disease, parasitic load, vitamin b deficiencies, zinc deficiencies, iron and protein deficiencies is part of the reason why their IQ is so low. Richard Lynn states that with proper nutrition, their IQ will be able to jump 13 points (which would only be possible with the white man because they are too stupid to learn HOW to farm as evidenced by Zimbabwe/Rhodesia). I’m hugely interested in nutrition and what nutrient deficiencies do in individuals and how they affect each individual as well as groups. People like to say the colonialism is the cause for the low HDI in Africa. Lies. Ethiopia was never colonized (it was in WW2 by Italy IIRC), and they still aren’t too well off, though they do have a 71 IQ, 4 points higher than the average according to Richard Lynn, which lines up with HBD, cause being that Ethiopians do have a high amount of Caucasoid ancestry.
So why do we get attacked? And called the buzzword ‘racist’ (which I should rally to change it to ‘ethnocentrist’)? It’s because, even though people may not have the knowledge, nor the intellect to know what we are talking about, they know deep down that we are right. I completely understand (though I fucking hate it) egalitarianism, but it’s a ridiculous concept. It simply will not work at all. Save, CRISPR gene editing. Obviously, once we identify intelligence genes (people like Steve Hsu are actively researching these things), we will be able to effect intelligence with CRISPR genome editing. Though, negros will still be negros. Low IQ isn’t all of it, but it has a lot to do with it, along with other biologic factors such as higher average testosterone (don’t show me that bullshit study about Meccies having higher test, they didn’t test free testosterone), MAOA-L gene, the 2 repeat version, as well as lack of empathy (currently, only Rhodesians were tested for this, but with more tests we know that it will come out that our good friends the African Americans will have the same).
Not even just normal people with everyday lives get heckled into not speaking out, but academics such as Rushton and Jensen and Herrnstein and Wilson had to cancel lectures because they got threats. People said, “Wilson, Hernnstein you can’t hide, you believe in genocide”.Why are they so scared about academic scholars, researchers and psychologists speaking to a group who wants to hear them speak? You have things like the Rushton/Suzuki debate, where Suzuki just threw ad hominems at Rushton, meanwhile, Rushton stayed calm, cool and collected, and stated the facts while Suzuki just gave character attacks and ad hominems saying TAKE HIS FUNDING AND FIRE HIM. Why is he so scared? He, as a geneticist, should KNOW about the genetic difference in IQ between individuals as well as group differences, but, he is a Marxist, such as Lewontin and Gould (who I will get to later). Rushton completely DESTROYED HIM in this debate.
We have Jews like Lewontin who say “The genetic distance in ethnicity is more than that of the genetic distance BETWEEN groups. Yes, this is true. But that doesn’t invalidate the ACTUALITY of race. We are 98.5 percent the same genetically to chimps, so obviously that small difference is enough to bring HUGE changes between us. It’s also not just the differences in the DNA between us, BUT HOW THE GENES ARE EXPRESSED that give the differences between the 2 species. So, with that being said, the so-called ‘small genetic distance between races’ DOES STILL MATTER, because it’s HOW THOSE DIFFERENCES IN GENES ARE EXPRESSED, and NOT the differences between them. Lewontin is also a self-professed Marxist, like our other friend, Steven Gould.
We all know that Gould is a liar. We all know that he deliberately fudged the data on Morton’s skulls. Why it took THIRTY YEARS to unveil the fraud that he did on a dead man, Samuel Morton, is beyond me. Him, along with Lewontin, put their political ideology of MARXISM before actual SCIENCE. He also lied about the data in The Bell Curve, as well as sex differences in intelligence, brain size, early IQ testing, the reality of the g factor, race and IQ, race and brain size, natural born criminals, between-group heritabilities, and evolutionary selection. Rushton destroys his ‘book’ The Mismeasure of Man. People have said to me “Don’t talk about a man who isn’t here to defend his views and what he wrote”. OH, YOU MEAN LIKE HOW HE TRIED TO SAY THAT MORTON’S SKULL DATA WAS WRONG??!?!?!? For some irony, here is such an ironic quote I saw while reading his book the other day:
May I end up next to Judas Iscariot, Brutus, and Cassius in the devil’s mouth at the center of hell if I ever fail to present my most honest assessment and best judgment of evidence for empirical truth. (pg 39, The Mismeasure of Man)
Now to get to Flynn (I really need to make a comprehensive post on this, will put it in the backlog). The man who continues to cite the Eyferth study, despite it being having nothing to do with this conversation as they didn’t retest the kids at adulthood. The term ‘Flynn Effect’ was termed by Murray and Herrnstein in The Bell Curve (which was also noticed by Richard Lynn in the 70s. Rushton calls it the “Lynn-Flynn Effect”). But, the ‘Flynn Effect’ is hogwash. Yea, IQ has been growing at a rate of 3 points per decade, in every country you look at since the 30s. But, the gap STILL persists. In 1945, the average white IQ was 85, right at the black average today. Richard Lynn stated that the ‘Flynn Effect’ is due to better nutrition, which I am inclined to agree with him there. So, even with the 3 points of increase over the past 70 years, the gap STILL PERSISTS, proving that it’s GENETIC IN ORIGIN. Yea, blacks are getting more intelligent (though, at the present time, blacks are showing a sharper dysgenic decline than whites) but whites are as well, and even with this 3 points jump in IQ per decade, the gap is still there. It IS genetic.
You have OTHER Marxists like Jared Diamond (do you see a trend? =^) ) that say that group differences come from the environment that they live in and evolved in. For instance that it’s harder to farm in Africa, domesticatable animals, according to him, only 13 animals have been domesticated. He even says that New Guineans MAY be more intelligent than Europeans (RIDICULOUS), did he admit that ENVIRONMENT CAUSE INTELLIGENCE DIFFERENCES DUE TO ENVIRONMENT?!?!??! He says that he got these views when one of his New Guinean friends said “Why do you have more cargo (physical possessions) than us? It’s clear that his Marxist egalitarian views PUSHED HIM to write his shitty ass book. Has been discredited by Rushton here.
We are on the verge of a paradigm shift. The longer the BL(don’t)M keeps up with their nonsense and non-arguments, as well as all of these other negros who call for a race war and to kill KKKops, the more that the sensible, high IQ American people will come to our side. Paradigms don’t stay the same way forever. Ever since the 60s we have been in an ever growing dystopia of Marxism and other leftist positions, which clearly don’t work.
WE WILL WIN THIS CULTURE WAR. This war isn’t physical, it’s a battle of minds, sources and intellect, which we clearly have. It’s up to people like us to defend the work of Rushton, Jensen and Herrnstein. They did what they could to get the truth out. They risked their safety, livelihood and academic credence to get this data out, despite what the egalitarian Marxists believed.
What would you think if you heard about a new fortune-telling device that is touted to predict psychological traits like depression, schizophrenia and school achievement? What’s more, it can tell your fortune from the moment of your birth, it is completely reliable and unbiased — and it only costs £100.
This might sound like yet another pop-psychology claim about gimmicks that will change your life, but this one is in fact based on the best science of our times. The fortune teller is DNA. The ability of DNA to understand who we are, and predict who we will become has emerged in the last three years, thanks to the rise of personal genomics. We will see how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. This is a game-changer as it has far-reaching implications for psychology, for society and for each and every one of us.
This DNA fortune teller is the culmination of a century of genetic research investigating what makes us who we are. When psychology emerged as a science in the early twentieth century, it focused on environmental causes of behavior. Environmentalism — the view that we are what we learn — dominated psychology for decades. From Freud onwards, the family environment, or nurture, was assumed to be the key factor in determining who we are. (Plomin, 2018: 6, my emphasis)
The main premise of Plomin’s 2018 book Blueprint is that DNA is a fortune teller and that personal genomics is a fortune-telling device. The fortune-telling device Plomin most discusses in the book is the polygenic score (PGS). PGSs are gleaned from GWA studies: SNP genotypes, coded 0, 1, or 2 for the number of effect alleles an individual carries, are weighted and summed, giving the individual their PGS for trait T. Plomin’s claim that DNA is a fortune teller falls, though, since DNA is not a blueprint—and the blueprint metaphor is where the fortune-teller claim derives from.
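For concreteness, the additive scoring described above can be sketched in a few lines of Python. This is a minimal illustration only, not any real scoring pipeline; the SNP IDs and effect weights are made up.

```python
# Minimal sketch of a polygenic score (PGS) computation: each SNP genotype
# is coded 0, 1, or 2 (the count of effect alleles) and weighted by the
# effect size estimated in a GWAS, then summed.
# The SNP IDs and weights below are hypothetical, purely for illustration.

def polygenic_score(genotypes, weights):
    """Sum of allele counts times GWAS effect weights over shared SNPs."""
    return sum(genotypes[snp] * weights[snp] for snp in weights if snp in genotypes)

# Hypothetical GWAS effect sizes for three SNPs
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One individual's genotypes (0/1/2 effect-allele counts)
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

print(round(polygenic_score(genotypes, weights), 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

Real pipelines also handle strand and allele matching, missing genotypes, and quality control, which this sketch omits.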
It’s funny that Plomin calls the measure “unbiased” (he is talking about DNA, which is in effect “unbiased”), but PGS are anything but unbiased. Most GWAS/PGS are derived from European populations, and there are “biases and inaccuracies of polygenic risk scores (PRS) when predicting disease risk in individuals from populations other than those used in their derivation” (De La Vega and Bustamante, 2018). (PRSs are derived from statistical gene associations using GWAS; Janssens and Joyner, 2019.) Europeans make up more than 80 percent of GWAS samples. Because GWASs are so heavily skewed toward European populations, “prediction accuracy [is] reduced by approximately 2- to 5-fold in East Asian and African American populations, respectively” (Martin et al, 2018). See for example Figure 1 from Martin et al (2018):
With the huge number of GWAS studies done on European populations, these scores cannot be used on non-European populations for ‘prediction’—even disregarding the other problems with PGS/GWAS.
By studying genetically informative cases like twins and adoptees, behavioural geneticists discovered some of the biggest findings in psychology because, for the first time, nature and nurture could be disentangled.
… DNA differences inherited from our parents at the moment of conception are the consistent, lifelong source of psychological individuality, the blueprint that makes us who we are. A blueprint is a plan. … A blueprint isn’t all that matters but it matters more than everything else put together in terms of the stable psychological traits that make us who we are. (Plomin, 2018: 6-8, my emphasis)
Never mind the slew of problems with twin and adoption studies (Joseph, 2014; Joseph et al, 2015; Richardson, 2017a). I also refuted the notion that “A blueprint is a plan” last year, quoting numerous developmental systems theorists. The main thrust of Plomin’s book—that DNA is a blueprint and can therefore be seen as a fortune teller, with personal genomics as the fortune-telling device that tells the fortunes of the people whose DNA is analyzed—is false, as DNA does not work the way Plomin imagines.
These big findings were based on twin and adoption studies that indirectly assessed genetic impact. Twenty years ago the DNA revolution began with the sequencing of the human genome, which identified each of the 3 billion steps in the double helix of DNA. We are the same as every other human being for more than 99 percent of these DNA steps, which is the blueprint for human nature. The less than 1 per cent of difference of these DNA steps that differ between us is what makes us who we are as individuals — our mental illnesses, our personalities and our mental abilities. These inherited DNA differences are the blueprint for our individuality …
[DNA predictors] are unique in psychology because they do not change during our lives. This means that they can foretell our futures from our birth.
The applications and implications of DNA predictors will be controversial. Although we will examine some of these concerns, I am unabashedly a cheerleader for these changes. (Plomin, 2018: 8-10, my emphasis)
This quote further reveals Plomin’s “blueprint” for the rest of his book—DNA can “foretell our futures from our birth”—and how that stance shapes the conclusions he draws from the work he discusses. Yes, all scientists are biased (as Stephen Jay Gould noted), but Plomin outright claimed to be an unabashed cheerleader for these changes, and that self-admission does explain some of the conclusions he makes in Blueprint.
However, the problem with the mantra ‘nature and nurture’ is that it runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled.
Our future is DNA. (Plomin, 2018: 11-12)
The problem with the mantra “nature and nurture” is not that it “runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled”—though that is one problem. The deeper problem is how Plomin assumes DNA works. That DNA can be disentangled from the environment presumes that DNA is environment-independent. But as Moore shows in his book The Dependent Gene—and as Schneider (2007) shows—“the very concept of a gene requires the environment”. Moore notes that “The common belief that genes contain context-independent ‘information’—and so are analogous to ‘blueprints’ or ‘recipes’—is simply false” (quoted in Schneider, 2007). Moore also showed in The Dependent Gene that twin studies are flawed, as have numerous other authors.
Lewkowicz (2012) argues that “genes are embedded within organisms which, in turn, are embedded in external environments. As a result, even though genes are a critical part of developmental systems, they are only one part of such systems where interactions occur at all levels of organization during both ontogeny and phylogeny.” Plomin—although he does not explicitly state it—is a genetic reductionist. This type of thinking can be traced back, most popularly, to Richard Dawkins’ 1976 book The Selfish Gene. The genetic reductionists can, and do, make the claim that organisms can be reduced to their genes, while developmental systems theorists claim that holism, and not reductionism, better explains organismal development.
The main thrust of Plomin’s Blueprint rests on (1) GWA studies and (2) the PGSs/PRSs derived from them. Ken Richardson (2017b) has argued that GWA studies pick up “some cryptic but functionally irrelevant genetic stratification in human populations, which, quite likely, will covary with social stratification or social class.” Richardson’s (2017b) argument is simple: societies are genetically stratified; social stratification maintains genetic stratification; social stratification creates—and maintains—cognitive differentiation; and “cognitive” tests reflect prior social stratification. Richardson and Jones (2019) extend the argument, showing that spurious correlations can arise from genetic population structure that GWA studies cannot fully account for—even though GWA study authors claim that population stratification is controlled for. Moreover, social class is defined solely on the basis of SES (socioeconomic status) and therefore does not capture all of what “social class” itself captures (Richardson, 2002: 298-299).
Plomin also heavily relies on the results of twin and adoption studies—a lot of it being his own work—to attempt to buttress his arguments. However, as Moore and Shenk (2016) show—and as I have summarized in Behavior Genetics and the Fallacy of Nature vs Nurture—heritability estimates for humans are highly flawed since there cannot be a fully controlled environment. Moore and Shenk (2016: 6) write:
Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much. In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’ 31 Reviewing the evidence, we come to the same conclusion.
Heritability estimates assume that nature (genes) can be separated from nurture (environment), but “the very concept of a gene requires the environment” (Schneider, 2007) so it seems that attempting to partition genetic and environmental causation of any trait T is a fool’s—reductionist—errand. If the concept of gene depends on and requires the environment, then how does it make any sense to attempt to partition one from the other if they need each other?
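The classical twin-study estimator that such critiques target is Falconer's formula, h² = 2(r_MZ − r_DZ). A minimal sketch, with illustrative (not real) twin correlations, shows how the number is produced, and why it inherits the environmental assumptions the critics reject:

```python
# Falconer's classical twin-study estimator: h2 = 2 * (r_mz - r_dz), where
# r_mz and r_dz are trait correlations for identical (MZ) and fraternal (DZ)
# twin pairs. The correlations below are illustrative, not from any study.
# The formula only isolates "genes" if MZ and DZ pairs experience equally
# similar environments -- the very assumption the critique above rejects.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

print(round(falconer_h2(r_mz=0.80, r_dz=0.50), 2))  # 0.6
```

Note that nothing in the arithmetic checks whether environments were in fact equal; the estimate simply attributes the excess MZ similarity to genes by definition.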
Let’s face it: Plomin, in his book Blueprint, is speaking like a biological reductionist, though he may deny the claim. The claims of those who push PRS for use in precision medicine are unfounded, as there are numerous problems with the concept. Precision medicine and personalized medicine are similar concepts, though Joyner and Paneth (2015) are skeptical of their use and pose seven questions for personalized medicine. Furthermore, Joyner, Boros and Fink (2018) argue that “redundant and degenerate mechanisms operating at the physiological level limit both the general utility of this assumption and the specific utility of the precision medicine narrative.” Joyner (2015: 5) also argues that “Neo-Darwinism has failed clinical medicine. By adopting a broader perspective, systems biology does not have to.”
Janssens and Joyner (2019) write that “Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest.” Researchers can show correlations between disease phenotypes and genes, but they cannot show causation—that would require mechanistic relations between the proposed genes and the disease phenotype. And, as Kampourakis (2017: 19) notes, genes do not cause diseases on their own; they only contribute to variation in them.
GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.
Turkheimer goes on to say that this (false) assumption by Plomin and Stumm (2018) presumes that there is no top-down causation, i.e., no causation running from phenotypes down to genes. (See the special issue of Interface Focus for a slew of articles on top-down causation.) Downward (top-down) causation exists in biological systems (Noble, 2012, 2017). The very claim that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” is ridiculous! This is something that, of course, Plomin did not discuss in Blueprint. But in a book that supposedly shows “how DNA makes us who we are”, why not discuss epigenetics? Plomin is confused, because DNA methylation impacts behavior and behavior impacts DNA methylation (Lerner and Overton, 2017: 114). Lerner and Overton (2017: 145) write that:
… it should no longer be possible for any scientist to undertake the procedure of splitting of nature and nurture and, through reductionist procedures, come to conclusions that the one or the other plays a more important role in behavior and development.
Plomin’s reductionist takes, therefore, fail again. Plomin’s “reluctance” to discuss “tangential topics” to “inherited DNA differences” included epigenetics (Plomin, 2018: 12). But that reluctance was a downfall of his book, as epigenetic mechanisms can and do make a difference to “inherited DNA differences” (see, for example, Baedke, 2018, Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics and Meloni, 2019, Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics; see also Meloni, 2018). The genome can and does “react” to what occurs to the organism in its environment, so it is false that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” (Plomin and Stumm, 2018): our behavior and actions can and do methylate our DNA (Meloni, 2014), which falsifies Plomin’s claim and is why he should have discussed epigenetics in Blueprint.
So the main premise of Plomin’s Blueprint rests on his two claims: (1) that DNA is a fortune teller and (2) that personal genomics is a fortune-telling device. He draws these big claims from PGS/PRS studies. However, over 80 percent of GWA studies have been done on European populations, and since we cannot carry those results over to non-European populations, the uses of PGS/PRS in other populations are greatly hampered—and PGS/PRS are not that useful in and of themselves even for European populations. Plomin’s whole book is a reductionist screed—“Sure, other factors matter, but DNA matters more” is one of his main claims. But since there is no privileged level of causation, one cannot, a priori, privilege DNA over any other developmental variable (Noble, 2012). To understand disease, we must understand the whole system: how, when one part becomes dysfunctional, the other parts and the running of the whole are affected. The PGS/PRS hunts are reductionist in nature, and the only answer to these reductionist paradigms is a new paradigm from systems biology—one of holism.
Plomin’s assertions in his book are gleaned from highly confounded GWA studies. Plomin also assumes that we can disentangle nature and nurture—like all reductionists. Nature and nurture interact: without genes, there would still be an environment, but without an environment, there would be no genes, as gene expression is predicated on the environment and what occurs in it. So Plomin’s reductionist claim that “Our future is DNA” is false—our future is studying the interactive developmental system, not reducing it to a sum of its parts. Holistic biology—systems biology—beats reductionist biology—the Neo-Darwinian Modern Synthesis.
DNA is neither a blueprint nor a fortune teller, and personal genomics is not a fortune-telling device. Plomin’s claims to the contrary are derived from highly flawed GWA studies and, further, from PGS/PRS. Therefore, those claims are false.
(Also read Eric Turkheimer’s 2019 review of Plomin’s book, The Social Science Blues, along with Steve Pitteli’s review Biogenetic Overreach, for overviews and critiques of Plomin’s ideas. And read Ken Richardson’s article It’s the End of the Gene As We Know It for a critique of the concept of the gene.)
One debate in the philosophy of science is whether a scientific hypothesis should make testable predictions or merely explain what it purports to explain. Should a scientific hypothesis H predict previously unknown facts of the matter, or only explain an observation? Take, for example, evolutionary psychology (EP). Any EP hypothesis H can speculate on the so-called causes that led a trait to become fixated in a biological population of organisms, but that is all it can do; the claim that EP hypotheses can do more than that—that is, generate successful predictions of previously unknown facts not used in the construction of the hypothesis—is unsupported. The claim, therefore, that EP hypotheses are anything but just-so stories is false.
Prediction and novel facts
For example, Einstein’s theory of general relativity predicted the bending of light, which was a novel prediction for the theory (see pp. 177-180 for predictions generated from Einstein’s theory). Fresnel’s wave theory of light predicted not only diffraction fringes but also the white spot—a bright spot which appears in a circular object’s shadow due to Fresnel diffraction (see Worrall, 1989). So Fresnel’s theory explained diffraction, and the theory then generated testable—and successful—novel predictions (see Magnus and Douglas, 2013). That is an example of successful novel prediction. Ad hoc hypotheses, by contrast, are produced “for this” explanation alone, so the only evidence for the hypothesis is, for example, the existence of trait T. EP hypotheses attempt to explain the fixation of traits in humans, but explaining is all they do; they generate no testable, novel predictions of previously unknown facts.
A defining feature of science and what it purports to do is to predict facts-of-the-matter which are yet to be known. John Beerbower (2016) explains this well in his book Limits of Science? (emphasis mine):
At this point, it seems appropriate to address explicitly one debate in the philosophy of science—that is, whether science can, or should try to, do more than predict consequences. One view that held considerable influence during the first half of the twentieth century is called the predictivist thesis: that the purpose of science is to enable accurate predictions and that, in fact, science cannot actually achieve more than that. The test of an explanatory theory, therefore, is its success at prediction, at forecasting. This view need not be limited to actual predictions of future, yet to happen events; it can accommodate theories that are able to generate results that have already been observed or, if not observed, have already occurred. Of course, in such cases, care must be taken that the theory has not simply been retrofitted to the observations that have already been made—it must have some reach beyond the data used to construct the theory.
That a theory or hypothesis explains observations isn’t enough—it must generate successful predictions of novel facts. If it generates no novel facts-of-the-matter, of what use is the hypothesis, when it only weakly accommodates the phenomenon in question? So what, then, is a novel fact?
A novel fact is a fact predicted by hypothesis H but not used in the construction of that hypothesis. For example, Musgrave (1988) writes:
All of this depends, of course, on our being able to make good the intuitive distinction between prediction and novel prediction. Several competing accounts of when a prediction is a novel prediction for a theory have been produced. The one I favour, due to Elie Zahar and John Worrall, says that a predicted fact is a novel fact for a theory if it was not used to construct that theory — where a fact is used to construct a theory if it figures in the premises from which that theory was deduced.
Mayo (1991: 524; her emphasis) writes that a “novel fact [is] a newly discovered fact—one not known before used in testing.” So a fact is novel for a hypothesis when it was not used in the construction of that hypothesis—for instance, a yet-to-be-observed event. About novel predictions, Musgrave also writes that “It is only novel predictive success that is surprising, where an observed fact is novel for a theory when it was not used to construct it.” So hypothesis H entails evidence E; E was not used in the construction of H; therefore E is novel evidence for H.
To philosopher of science Imre Lakatos, a progressive research program is one that generates novel facts, whereas a degenerating research program either fails to generate novel facts or has its once-novel predictions continually falsified, according to Musgrave in his article on Lakatos. We can put EP in the “degenerating research program” category, as no EP hypothesis generates any type of novel prediction—the only evidence for the trait is the existence of the trait.
The term “just-so stories” comes from Rudyard Kipling’s Just So Stories for Little Children. Gould and Lewontin then used the term for evolutionary hypotheses that can only explain, and cannot predict, as-of-yet-unknown events. Law (2016) notes that just-so stories offer “little in the way of independent evidence to suggest that it is actually true.” Sterelny and Griffiths (1999: 61) state that a just-so story is “… an adaptive scenario, a hypothesis about what a trait’s selective history might have been and hence what its function may be.” Examples of just-so stories covered on this blog include: beards, FOXP2, cartels and Mesoamerican ritual sacrifice, Christian storytelling, just-so storytellers and their pet just-so stories, the slavery hypertension hypothesis, fear of snakes and spiders, and cold winter theory. Smith (2016: 278) has a helpful table showing ten different definitions and descriptions of just-so stories:
So the defining criterion for just-so stories is the lack of independent evidence for the proposed explanation of the trait’s existence. There must be independent reasons to believe a certain hypothesis, as the defining feature of a scientific hypothesis or theory is whether or not it can predict yet-to-happen events. Though, as Beerbower notes, we have to be careful that we do not retrofit the theory to the observations.
One can make an observation. Then one can work backward (what Richardson (2007) calls “reverse engineering”) and posit (speculate about) a good-sounding story (just-so storytelling) to explain this observation. Reverse engineering is “a process of figuring out the design of a mechanism on the basis of an analysis of the tasks it performs” (Buller, 2005: 92). Of course, the just-so storyteller can then create a story to explain the fixation of the trait in question. But that is only (purportedly) an explanation of why the trait came to fixation for us to observe today. There are no testable predictions of previously unknown facts. So it’s all storytelling—speculation.
The theory of natural selection is then deployed to attempt to explain the fixation of trait T in a population. It is true that a hypothesis is weakly corroborated by the existence of trait T, but what makes it a just-so story is the fact that there are no successful predictions of previously unknown facts.
When it comes to EP, one can say that the hypothesis “makes sense” and it “explains” why trait T still exists and went to fixation. However, the story only “makes sense” because there is no other way for it to be—if the story didn’t “make sense”, then the just-so storyteller wouldn’t be telling the story because it wouldn’t satisfy their aims of “proving” that a trait is an adaptation.
Smith (2016: 277-278) notes seven just-so story triggers:
1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.
EP is most guilty of (3), (4), (5), (6), and (7). It is guilty of (3) in that it hardly ever posits other explanations for trait T—it is always “adaptation”, as EP is an adaptationist paradigm. It is perhaps most guilty of (4): that trait T still exists and is useful today is not evidence that trait T was selected-for the use we see it put to today. This then leads to (5), the misuse of reverse engineering. It is guilty of (6) in that just-so stories are ad hoc (“for this”) explanations, and such explanations are ad hoc when there is no independent evidence for the hypothesis. And it is guilty of (7) in that it attempts to explain unique events in human evolution. Many problems exist for evolutionary psychology (see for example Samuels, 1998; Lloyd, 1999; Prinz, 2006), but the biggest problem is the inability of any EP hypothesis to generate testable, novel predictions. Smith (2016: 279) further writes that:
An important weakness in the use of narratives for scientific purposes is that the ending is known before the narrative is constructed. Merton pointed out that a “disarming characteristic” of ex post facto explanations is that they are always consistent with the observations because they are selected to be so.
Bo Winegard, in his defense of just-so storytelling, writes “that inference to the best explanation most accurately describes how science is (and ought to be) practiced. According to this description, scientists forward theories and hypotheses that are coherent, parsimonious, and fruitful.” However, as Smith (2016: 280-281) notes, that a hypothesis is “coherent”, “parsimonious”, and “fruitful” (along with 11 more explanatory virtues of IBE, including depth, precision, consilience, and simplicity) is not sufficient reason to accept it—IBE is not a solution to the problems raised by the just-so story critics, as the slew of explanatory virtues does not lend evidence that trait T was an adaptation and thus does not lend evidence that hypothesis H is true.
Simon (2018: 5) concludes that “(1) there is much rampant speculation in evolutionary psychology as to the reasons and the origin for certain traits being present in human beings, (2) there is circular reasoning as to a particular trait’s supposed advantage in adaptability in that a trait is chosen and reasoning works backward to subjectively “prove” its adaptive advantage, (3) the original classical theory is untestable, and most importantly, (4) there are serious doubts as to Natural Selection, i.e., selection through adaptive advantage, being the principal engine for evolution.” (1) is true since that’s all EP is—speculation. (2) is true in that evolutionary psychologists notice trait T and reason that, since it survives today, there must be a function it performs, which is why natural selection “selected” the trait to propagate in the species (though selection cannot select-for certain traits). (3) is true in that we have no time machine to go back and watch how trait T evolved (this is where the storytelling narrative comes in: if only we had a good story to tell about the evolution of trait T). And finally, (4) is also true since natural selection is not a mechanism (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010).
EP exists in an attempt to explain so-called psychological adaptations humans have to the EEA (environment of evolutionary adaptedness). So one looks at the current phenotype and then looks to the past in an attempt to construct a “story” which shows how a trait came to fixation. There are, furthermore, no hallmarks of adaptation. When one attempts to use selection theory to explain the fixation of trait T, one must wrestle with spandrels. Spandrels are heritable, can increase fitness, and are selected as well—as the whole organism is selected. This, of course, falls right back to Fodor’s (2008) argument against natural selection. Fodor (2008: 1) writes that the central claim of EP “is that heritable properties of psychological phenotypes are typically adaptations; which is to say that they are typically explained by their histories of selection.” But if “psychological phenotypes” cannot be selected-for, then the whole EP paradigm crumbles.
This is why EP is not scientific. It cannot make successful predictions of previously unknown facts not used in the construction of the hypothesis, it can only explain what it purports to explain. The claim, therefore, that EP hypotheses are anything but just-so stories is false. One can create good-sounding narratives for any type of trait. But that they “sound good” to the ear, and are “plausible” are not reasons to believe that the story told is true.
Are all hypotheses just-so stories? No. Since a just-so story is an ad hoc hypothesis, and a hypothesis is ad hoc if it cannot be independently verified, a hypothesis that makes predictions which can be independently verified is not a just-so story. There are hypotheses that generate no predictions, ad hoc hypotheses (where the only evidence to believe H is the existence of trait T), and hypotheses that generate novel predictions. EP hypotheses are the second of these—the only evidence we have to believe H is true is that trait T exists. Independent evidence is a necessary condition of science—that is, the ability of a hypothesis to predict novel evidence is a necessary condition for science. That no EP hypothesis has generated a successful novel prediction is evidence that all EP hypotheses are just-so stories. So for the criticism to be refuted, one would have to name an EP hypothesis that is not a just-so story—that is, (1) name an EP hypothesis, (2) state the prediction, and (3) state how the prediction follows from the hypothesis.
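The three-way division above (hypotheses with no predictions; ad hoc hypotheses; hypotheses that generate novel predictions) can be sketched as a toy classifier. This is illustrative only—the labels are made up and no real EP hypothesis is encoded:

```python
# Toy triage of hypotheses into the three kinds described above.
# The labels and example inputs are hypothetical illustrations.

def classify(predictions, independently_verifiable):
    """Sort a hypothesis into one of the three kinds in the text."""
    if not predictions:
        return "no predictions"
    if not independently_verifiable:
        return "ad hoc (just-so story)"  # only evidence is trait T itself
    return "generates novel predictions"

# A hypothesis whose only support is the existence of trait T:
verdict = classify(predictions=["trait T exists"], independently_verifiable=False)
```

In this sketch, `verdict` lands in the “ad hoc (just-so story)” bin—the category the text assigns to EP hypotheses.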
To be justified in believing hypothesis H as an explanation of how trait T came to fixation in a population, there must be independent evidence for this belief. The hypothesis must generate a novel fact—one unknown before the hypothesis was constructed. If the hypothesis cannot generate any predictions, or the predictions it makes are continuously falsified, then the hypothesis is to be rejected. No EP hypothesis can generate successful predictions of novel facts, and so the whole EP enterprise is a degenerating research program. The EP paradigm explains and accommodates, but no EP hypothesis generates independently confirmable evidence. Therefore EP is not a scientific program and just-so stories are not scientific.
I recently bought Lamarck’s Revenge by paleobiologist Peter Ward (2018) because I went on a trip and needed something to read on the flight. I just finished the book the other day and I thought that I would give a review and also discuss Coyne’s review of the book, since I know he is so uptight about epigenetic theories, like those of Denis Noble and Jablonka and Lamb. In Lamarck’s Revenge, Ward (2018) purports to show that Lamarck was right all along and that the advent of the burgeoning field of epigenetics is “Lamarck’s revenge” on those who—in the current day—make fun of his theories in intro biology classes. (When I took Bio 101, the professor made it a point to bring up Lamarck and giraffe necks as a “Look at this wrong theory”, never mind the fact that Darwin was wrong too.) I will go chapter-by-chapter, give a brief synopsis of each, and then discuss Coyne’s review.
In the introduction, Ward discusses some of the problems with Darwinian thought and current biological understanding. The current neo-Darwinian Modern Synthesis states that what occurs in the lifetime of the organism cannot be passed down to further generations—that any ‘marks’ on the genome are then erased. However, recent research has shown that this is not the case. Numerous studies on plants and “simpler” organisms refute the notion, though for more “complex” organisms it has yet to be proved. However, that this discussion is even occurring is proof that we are heading in the right direction in regard to a new synthesis. In fact, Jablonka and Lamb (2005) showed in their book Evolution in Four Dimensions, that epigenetic mechanisms can and do produce rapid speciation—too quick for “normal” Darwinian evolution.
Ward (2018: 3-4) writes:
There are good times and bad times on earth, and it is proposed here that dichotomy has fueled a coupling of times when evolution has been mainly through Darwinian evolution and others when Lamarckian evolution has been dominant. Darwinian in good times, Lamarckian in bad, when bad can be defined as those times when our environments turn topsy-turvy, and do so quickly. When an asteroid hits the planet. When giant volcanic episodes create stagnant oceans. When a parent becomes a sexual predator. When our industrial output warms the world. When there are six billion humans and counting.
These examples are good—save the one about a parent becoming a sexual predator (though if we accept the thesis that what we do and what happens to us can leave heritable marks on our DNA without changing the sequence, then it fits)—and they all point to one thing: the environment becoming ultra-chaotic. When such changes occur in the environment, an organism needs a physiology that can change on demand to survive (see Richardson, 2017).
Ward (2018: 8) then describes Lamarck’s three-step process:
First, an animal experienced a radical change of the environment around it. Second, the initial response to the environmental change was some new kind of behavior by that animal (or whole species). Third, the behavioral change was followed by morphological change in subsequent generations.
Ward then discusses others before Darwin—Darwin’s grandfather Erasmus, for instance—who had theories of evolution before Darwin. In any case, we went from a world in which a God created all to a world where everything we see was created by natural processes.
Then in Chapter 2, Ward discusses Lamarck and Darwin and each of their theories in turn. (Note that Darwin held Lamarckian views too.) Ward discusses the intellectual duel between Lamarck and Georges Cuvier, the father of comparative anatomy, who studied mass extinctions. At Lamarck’s funeral, Cuvier spoke ill of Lamarck and buried his theories. (See Cuvier’s (1836) Elegy of Lamarck.) These types of arguments between academics have been going on for hundreds of years—and they will not stop any time soon.
In Chapter 3 Ward discusses Darwin’s ideas all the way to the Modern Synthesis, discussing how Darwin formulated his theory of natural selection, the purported “mechanism of evolution.” Ward discusses how Darwin at first rejected Lamarck’s ideas but then integrated them into future editions of On the Origin. We can think of this scenario: Imagine any environment and organisms in it. The environment rapidly shifts to where it is unrecognizable. The organisms in that environment then need to either change their behavior (and reproduce) or die. Now, if there were no way for organisms to change, say, their physiology (since physiology is dependent on what is occurring in the outside environment), then the species would die and there would be no evolution. However, the advent of evolved physiologies changed that. Morphologic and physiologic plasticity can and does help organisms survive in new environments—environments that are “new” to the parental organism—and this is a form of Lamarckism (“heritable epigenetics” as Ward calls it).
Ward and his colleagues studied two (so-called) different species of nautilus: Nautilus pompilius, widespread across the Pacific and Indian Oceans, and Nautilus stenomphalus, which is found only at the Great Barrier Reef. N. pompilius has a hole in the middle of its shell, whereas N. stenomphalus has a plug in the middle. The two (so-called) species also differ in anatomy—N. pompilius has a hood covered with bumps of flesh whereas N. stenomphalus‘ hood is filled with projections of moss-like twig structures. So over a thirty-day period, they captured thirty nautiluses, snipped a piece of their tentacles, and sequenced the DNA found in it. They found that the DNA of these two morphologically different animals was the same. Thus, although the two are said to be different species based on their morphology, genetically they are the same species, which leads Ward (2018: 52) to claim “that perhaps there are fewer, not more, species on Earth than science has defined.” Ward (2018: 53) cites a related example—the fact that the Columbian and North American woolly mammoths “were genetically the same but the two had phenotypes determined by environment” (see Enk et al, 2011).
Now take Ward’s (2018: 58) definition of “heritable epigenetics”:
In heritable epigenetics, we pass on the same genome, but one marked (mark is the formal term for the place that a methyl molecule attaches to one nucleotide, a rung in the ladder of DNA) in such a way that the new organism soon has its own DNA swarmed by these new (and usually unwelcome) additions riding on the chromosomes. The genotype is not changed, but the genes carrying the new, sucker-like methyl molecules change the workings of the organism to something new, such as the production (or lack thereof) of chemicals necessary for our good health, or for how some part of the body is produced.
Chapter 5 discusses different environments in the context of evolutionary history. Environmental catastrophes that lead to the decimation of most life on the planet are the subject—something that Gould wrote about in his career (his concept of contingency in the evolutionary process). Now, going back to Lamarck’s dictum (first an environmental change, second a change in behavior, and third a change in phenotype), we can see that these kinds of processes were indeed imperative in the evolution of life on earth. Take the asteroid impact (K-Pg extinction; Cretaceous-Paleogene) that killed off the dinosaurs and threw tons of soot into the air, blocking out the sun making it effectively night (Schulte et al, 2010). All organisms that survived needed to eat. If the organism only ate in the day time, it would then need to eat at night or die. That right there is a radical environmental change (step 1) and then a change in behavior (step 2) which would eventually lead to step 3.
In Chapter 6, Ward discusses epigenetics and the origins of life. The main subject of the chapter is lateral gene transfer—the transmission of DNA between genomes. Hundreds or thousands of new genes can be inserted into an organism, effectively changing its morphology—a Lamarckian mechanism. Ward posits that there were many kinds of “genetic codes” and “metabolisms” throughout earth’s history, even organisms that were “alive” but not capable of reproducing, and so were “one-offs.” Ward even describes Margulis’ (1967) theory of endosymbiosis as “a Lamarckian event”, which even Margulis accepted. Thus, the evolution of organisms through lateral gene transfer is possible and is another Lamarckian mechanism.
Chapter 7 discusses epigenetics and the Cambrian explosion. Ward cites a Creationist who claims that there has not been enough time since the explosion, 500 million years ago, to explain the diversity of body plans that has appeared since. Stephen Jay Gould wrote a whole book on this—Wonderful Life. It is true that Darwinian theory cannot explain the diversity of body plans, nor even the diversity of species and their traits (Fodor and Piattelli-Palmarini, 2010), but this does not mean that Creationism is true. If we are discussing the diversification of organismal life after mass extinctions, then Darwinian evolution cannot have possibly played a role in the survival of species—organisms with adaptive physiologies would have had a better chance of surviving in these new, chaotic environments.
It is posited here that four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion as we currently know it: the first, now familiar, methylation; second, small RNA silencing; third, changes in the histones, the scaffolding that dictates the overall shape of a DNA molecule; and, finally, lateral gene transfer, which has recently been shown to work in animals, not just microbes. (Ward, 2018: 113)
Ginsburg and Jablonka (2010) state that “[associative] learning-based diversification was accompanied by neurohormonal stress, which led to an ongoing destabilization and re-patterning of the epigenome, which, in turn, enabled further morphological, physiological, and behavioral diversification.” So associative learning, according to Ginsburg and Jablonka, was the driver of the Cambrian explosion. Ward (2018: 115) writes:
[The paper by Ginsburg and Jablonka] says that changes of behavior by both animal predators and animal prey began as an “arms race” in not just morphology but behavior. Learning how to hunt or flee; detecting food and mats and habitats at a distance from chemical senses of smell or vision, or from deciphering vibrations coming through water. Yet none of that would matter if the new behaviors and abilities were not passed on. As more animal body plans and the species they were composed of appeared, ecological communities changed radically and quickly. The epigenetic systems in animals were, according to the authors, “destabilized,” and in reordering them it allowed new kinds of morphology, physiology, and again behavior, and amid this was the ever-greater use of powerful hormone systems. Seeing an approaching predator was not enough. The recognition of imminent danger would only save an animal’s life if its whole body was alerted and put on a “war footing” by the flooding of the creature with stress hormones. Powerful enactors of action. Over time, these systems were made heritable and, according to the authors, the novel evolution of fight or flight chemicals would have greatly enhanced survivability and success of early animals “enabled animals to exploit new niches, promoted new types of relations and arms races, and led to adaptive responses that became fixed through genetics.”
That, and vision. Brains, behavior, sense organs, and hormones are tied to the nervous system and the digestive system. No single adaptation led to animal success. It was the integration of these disparate systems into a whole that fostered survivability, and fostered the rapid evolution of new kinds of animals during the evolutionarily fecund Cambrian explosion.
So, ever-changing environments are how physiological systems evolved (see Richardson, 2017: Chapters 4 and 5). Therefore, if the environment were static, then physiologies would not have evolved. Ever-changing environments were imperative to the evolution of life on earth. For if this were not the case, organisms with complex physiologies (note that a physiological system is literally a whole complex of cells) would never have evolved and we would not be here.
In Chapter 8, Ward discusses epigenetic processes before and after mass extinctions. He states that, to mass extinction researchers, there are three ways in which mass extinctions have occurred: (1) asteroid or comet impact; (2) greenhouse mass extinction events; and (3) glaciation extinction events. These mass extinctions caused the emergence of new body plans and new species—brought on by epigenetic mechanisms.
Chapter 9 discusses good and bad times in human history—and the epigenetic changes that may have occurred. Ward (2018: 149) discusses the Toba eruption and that “some small group of survivors underwent a behavioral change that became heritable, producing cultural change that is difficult to overstate.” Environmental change leads to behavioral change which eventually leads to change in morphology, as Lamarck said, and mass extinction events are the perfect way to show what Lamarck was saying.
In chapter 10 Ward discusses epigenetics and violence, the star of the chapter being MAOA. Take this example from Ward (2018: 167-168):
Causing violent death or escaping violent death or simply being subjected to intense violence causes significant flooding of the body with a whole pharmacological medicine chest of proteins, and in so doing changes the chemical state of virtually every cell. This produces epigenetic change(s) that can, depending on the individual, create a newly heritable state that is passed on to the offspring. The epigenetic change caused by the fight-or-flight response may cause progeny to be more susceptible to causing violence.
Ward then discusses MAOA (pg 168-170), though read my thoughts on the matter. (He discusses the role of epigenetics in the “turning on” of the gene.) Child abuse has been shown to cause epigenetic changes in the brain (Zannas et al, 2015). (It’s notable that Ward—rightly—dispenses with the nature vs. nurture argument in this chapter.)
In Chapter 11, Ward discusses food and famine changing our DNA. He cites the most popular example, that of the studies done on survivors of the Dutch famine who bore children during or after it. (I have discussed this at length.) In September of 1944, the Dutch ordered a nationwide railroad strike. The Germans then restricted food and medical access to the country, causing the deaths of some 20,000 people and harming millions more. Those who were in the womb during the famine had higher rates of disorders such as obesity, anorexia, and cardiovascular disease.
However, one study showed that if one’s father had little access to food during the slow growth period, then cardiovascular disease mortality was low. But diabetes mortality was high when the paternal grandfather was exposed to excess food. Further, when SES factors were controlled for, the difference in lifespan was 32 years, which was dependent on whether or not the grandfather was exposed to an overabundance of food or lack of abundance of food just before puberty.
Nutrition can alter the epigenome (Zhang and Kutateladze, 2018), and since the epigenome is heritable, these changes can be passed on to future generations too.
Ward then discusses the microbiome and epigenetics (read my article for a primer on the microbiome, what it does, and racial differences in it). The microbiome has been called “the second genome” (Grice and Segre, 2012), and so, any changes to the “second genome” can also be passed down to subsequent generations.
In Chapter 12, Ward discusses epigenetics and pandemics. Seeing people die from horrible diseases of course has horrible effects on people. Yes, there were evolutionary implications from these pandemics in that the gene pool was decreased—but what of the effects on the survivors? Methylation impacts behavior and behavior impacts methylation (Lerner and Overton, 2017), and so, differing behaviors after such atrocities can be tagged on the epigenome.
Ward then takes the discussion on pandemics and death and shifts to religion. Imagine seeing your children die—would you not want to believe that there was a better place for them after death, to somewhat quell your sorrow over their loss? Of course, having an epiphany about something (anything, not just religion) can change how you view life. Ward also discusses a study in which atheists had different brain regions activated even while no stimulation was presented. (I don’t like brain-imaging studies; see William Uttal’s books and papers.) Ward also discusses the VMAT2 gene, which “controls” mood through the production of the VMAT protein, elevating hormones such as dopamine and serotonin (similar to taking numerous illegal drugs).
Then in Chapter 13 he discusses chemicals and toxins and how they relate to epigenetic processes. These kinds of chemicals and toxins are linked with changes in DNA methylation, microRNAs, and histone modifications (Hou et al, 2012). (Also see Tiffon, 2018 for more on chemicals and how they affect the epigenome.)
Finally, in Chapter 14 Ward discusses the future of evolution in a world with CRISPR-Cas9. He discusses many ways in which the technology can be useful to us. He discusses one study in which Chinese scientists knocked out the myostatin gene in 65 dog embryos. Twenty-seven of the dogs were born and only two—a male and a female—had both copies of the myostatin gene disrupted. This is just like when researchers made “double-muscled” cattle. See my article ‘Double-Muscled’ Humans?
He then discusses the possibility of “supersoldiers” and if we can engineer humans to be emotionless killing machines. Imagine being able to engineer humans that had no sympathy, no empathy, that looked just like you and I. CRISPR is a tool that uses epigenetic processes and, thus, we can say that CRISPR is a man-made Lamarckian mechanism of genetic change (mimicking lateral gene transfer).
Now, let’s quickly discuss Coyne’s review before I give my thoughts on the book. He criticizes Ward’s article linked above (Coyne admits he did not read the book), disputing Ward’s claim that the two nautiluses discussed above are the same species with the same genome, with epigenetic forces leading to the differences in morphology (phenotype). Take Coyne’s critique of Vandepas et al (2016)—that they only sequenced two mitochondrial genes. Combosch et al (2017; of which Ward was a coauthor) write (my emphasis):
Moreover, previous molecular phylogenetic studies indicate major problems with the conchiological species boundaries and concluded that Nautilus represents three geographically distinct clades with poorly circumscribed species (Bonacum et al, 2011; Ward et al, 2016). This has been reiterated in a more recent study (Vandepas et al, 2016), which concluded that N. pompilius is a morphologically variable species and most other species may not be valid. However, these studies were predominantly or exclusively based on mitochondrial DNA (mtDNA), an informative but often misleading marker for phylogenetic inference (e.g., Stöger & Schrödl 2013) which cannot reliably confirm and/or resolve the genetic composition of putative hybrid specimens (Wray et al, 1995).
Looks like Coyne did not look hard enough for more studies on the matter. In any case, it’s not just Ward that makes this argument—many other researchers do (see e.g., Tajika et al, 2018). So, if there is no genetic difference between these two (so-called) species, and they have morphological differences, then the possibility that seems likely is that the differences in morphology are environmentally-driven.
Lastly, Coyne was critical of Ward’s thoughts on the heritability of histone modification, DNA methylation, etc. It seems that Coyne has not read the work of philosopher Jan Baedke (see his Google Scholar page), specifically his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics along with the work of sociologist Maurizio Meloni (see his Google Scholar page), specifically his book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics. If he did, Coyne would then see that his rebuttal to Ward makes no sense as Baedke discusses epigenetics from an evolutionary perspective and Meloni discusses epigenetics through a social, human perspective and what can—and does—occur in regard to epigenetic processes in humans.
Coyne did discuss Noble’s views on epigenetics and evolution—and Noble responded in one of his talks. However, it seems like Coyne is not aware of the work of Baedke and Meloni—I wonder what he’d say about their work? Anything that attacks the neo-Darwinian Modern Synthesis gets under Coyne’s skin—almost as if it is a religion for him.
Did I like the book? I thought it was good. Out of 5 stars, I give it 3. He got some things wrong. For instance, I asked Shea Robinson, author of Epigenetics and Public Policy: The Tangled Web of Science and Politics, about the beginning of the book and he directed me to two articles on his website: Lamarck’s Actual Lamarckism (or How Contemporary Epigenetics is not Lamarckian) and The Unfortunate Legacy of Jean-Baptiste Lamarck. The beginning of the book is rocky, the middle is good (discussing the Cambrian explosion), and the end is alright. The strength of the book is how Ward discusses the processes by which epigenetic change occurs and how epigenetic processes can contribute to—and help drive—evolutionary change, just as Jablonka and Lamb (1995, 2005) argue, along with Baedke (2018). The book is a great read, if only for the history of epigenetics (which Robinson (2018) covers in more depth, as do Baedke (2018) and Meloni (2019)).
Lamarck’s Revenge is a welcome addition to the slew of books and articles that go against the Modern Synthesis and should be required reading for those interested in the history of biology and evolution.
Hoffman et al (2016) gave laypeople, medical students, and residents a 15-question questionnaire regarding different beliefs people hold about racial differences. The point of the questionnaire was to ascertain how people are biased in regard to racial differences in pain and how that bias affects the treatment individuals of a given racial group receive. Only two of the questions had anything to do with pain. In this article, I will answer the questions one by one.
1. On average, Blacks age more slowly than Whites.
This one is true (though they rate this question as false). I don’t know why they do, because there are differences between black and white skin, and these differences affect the rate of aging between races.
Campiche et al (2019) found that there is a difference in skin aging between different ethnies (the cohorts were French and Mauritanian). The average age was 46 for the French and 56 for the Mauritanians, and the Mauritanians still looked younger! Campiche et al (2019) write:
The difference in age between our Caucasian and Black African cohorts (median age 46 years vs 56 years) could bring into question the comparisons of the two cohorts. Nevertheless, we mostly found that Caucasians displayed more severe signs of aging than Black Africans which is in line with the common understanding that the onset of aging in fair skin starts earlier than in darkly pigmented skin and that there were differences in the appearance of lip lines and facial pores.
This question is true, contrary to the claims of Hoffman et al (2016).
2. Black people’s nerve-endings are less sensitive than White people’s nerve-endings.
I can find no literature on this matter; the only articles I find point me back to Hoffman et al (2016) and pieces discussing it. I accept the claim as false.
3. Black people’s blood coagulates more quickly–because of that, Blacks have a lower rate of hemophilia than Whites.
Blacks’ blood does clot faster than whites’, and part of the cause is differences in the PAR4 gene family (Bray et al, 2013). The reason is the effects of thrombin, an enzyme that activates the molecule responsible for blood clotting. Blacks do have a lower rate of hemophilia than whites, though not by much (13.2 cases/100,000 for whites compared to 11 for blacks) (Soucie, Evatt, and Jackson, 1998). The question is true, contra Hoffman et al (2016).
4. Whites, on average, have larger brains than Blacks.
They stated that this question is false, which is bizarre. I am aware of no literature attesting to the claim that whites do not have larger brains than blacks. Many analyses back the claim that whites have larger brains than blacks (though Nisbett disagrees and states that there are studies showing the contrary, without leaving a citation) (Rushton, 1997). (Though see Race and Brain Size: Blacks Have Bigger Brains for an alternate view.)
5. Whites are less susceptible to heart disease like hypertension than Blacks.
They say this claim is true. And it is. Hypertension (high blood pressure) is a physiological variable, which means that the social environment can greatly affect it (Williams, 1992). Higher rates of obesity drive this association as well. American blacks have a lower rate of CHD than whites (7.2 compared to 7.8), but this is reversed for women (7.0 compared to 4.6) (Leigh, Alvarez, and Rodriguez, 2016). The CDC, though, says that the rate of heart disease is the same between blacks and whites, at 23.8 percent (slightly higher than the 23.5 percent average).
6. Blacks are less likely to contract spinal cord diseases like multiple sclerosis.
7. Whites have a better sense of hearing compared with Blacks.
They state that this claim is false. Pratt et al (2009) state that hearing loss is more likely to occur in white than in black elderly patients.
8. Black people’s skin has more collagen (i.e., it’s thicker) than White people’s skin.
They state that this claim is false, and it is: there is no difference in skin thickness between blacks and whites. That is largely irrelevant, though, since black skin is more compact, with greater intercellular cohesion (La Ruche and Cesarini, 1992; Rawlings, 2006).
9. Blacks, on average, have denser, stronger bones than Whites.
10. Blacks have a more sensitive sense of smell than Whites; they can differentiate odors and detect faint smells better than Whites.
This claim is false, according to Hoffman et al, and I can find nothing in the literature on the matter, so I will accept their claim.
11. Whites have more efficient respiratory systems than Blacks.
They state that this claim is false. However, Schwartz et al (1988) state that “Controlling for sex, age, standing height, and body mass index, blacks had consistently lower levels of lung function for most measures.” This claim seems to be true.
12. Black couples are significantly more fertile than White couples.
They state this claim is false. Wellons et al (2008) state that “black women were more likely to have experienced infertility.” So the evidence runs in the opposite direction of what Hoffman et al ask.
13. Whites are less likely to have a stroke than Blacks.
They state that this claim is true, and it is: minorities are more likely to have a stroke than whites. Bravata et al (2005) write that blacks are more likely to have severe strokes than whites.
14. Blacks are better at detecting movement than Whites.
This seems like a bizarre claim. They state that it is false and I will accept it as false since I can find no literature on the matter.
15. Blacks have stronger immune systems than Whites and are less likely to contract colds.
Europeans and Africans have different immune systems, and the immune system of black Americans is stronger than that of whites. Twenty-four hours after infecting samples with salmonella and listeria bacteria, researchers found that white blood cells from black Americans responded more quickly than those from white Americans, ridding the infection about three times faster. They stated that this claim is false, but it appears to be true.
So, by my count, 8 of the 15 questions have a factual basis (some in the opposite direction), compared to Hoffman et al’s (2016) assertion that only 4 of them are true. In any case, there are a lot of myths about racial differences out there, and some of these questions by Hoffman et al are myths, though some do have a factual basis. I wonder what literature they referred to when writing these questions, because the literature I am aware of on some of these matters differs from what Hoffman et al (2016) claim. Racial/ethnic differences do, obviously, exist, but there are many myths involved with them.
Helmuth Nyborg published an article in Psych titled Race as Social Construct (Nyborg, 2019). In it, he responds to a National Geographic article, There’s No Scientific Basis for Race—It’s a Made-Up Label, quoting what are apparently quotations from the article. Yet take, for example, this:
‘There’s No Scientific Basis for Race’—‘It’s a Made-Up Label’… ‘Races do not exist because we are equals’, ‘the concept of race is not grounded in genetics’, etc.
The second quote, “Races do not exist because we are equals”, is not in the article. (Though it is probably a general call-out to so-called “social constructivists about race.”) Now, I won’t nit-pick about it, since he is apparently speaking to his critics who make these claims. In any case, Nyborg’s article is titled Race as Social Construct. Where are constructivists about race said to be anti-realists or eliminativists about race? If Nyborg is really speaking to constructivists about race, then he’s strawmanning their position, because social constructivists about race are realists about race.
Take the new AAPA Statement on Race and Racism, where they write:
… race has become a social reality that structures societies and how we experience the world. In this regard, race is real, as is racism, and both have real biological consequences.
““race” as a social reality — as a way of structuring societies and experiencing the world — is very real.”
So, if constructivists about race claim that “Races do not exist”, then why are social constructivists about race literally saying “race is real” and “”race” as a social reality … is very real”? Weird… Almost as if Nyborg is strawmanning the constructivist position. Nyborg asks if “NG also think[s] of species as a social construct?“. See Elstein (2003) for a view that species are socially constructed. In any case, I don’t think that Nyborg is familiar with the philosophical literature on the status of species.
Here is Nyborg’s first syllogism:
Samuel Morton is a reprehensible model racist with a fixed definition of race.
1. Samuel Morton is the father of scientific racism.
2. (We “know” that the father of scientific racism has THE correct understanding of race).
3. Morton thinks that races represent separate acts of creation.
4. Morton thinks that races are ranked in a divine hierarchy.
5. Morton did not think that races were closely related.
6. Morton thinks that races have distinct characters which:
(a) Are immutable or “fixed” across generations (i.e., no transmutation, aka evolution).
(b) Are homogeneous or “fixed” (in these senses of fixation) across individuals within races.
Morton is wrong about 3-6, and thus represents the opposite of reality. We can then say, given 1-2 and 3-6, that races do not exist.
This is ridiculous. Where has anyone written anything like this, that since Morton was a “racist” that “races do not exist”? Did Gould make that claim in Mismeasure? I personally think that Morton’s analysis was flawed by his own biases, but I do not make the claim that “races do not exist” because of it.
In any case, when it comes to Gould’s critique of Morton’s skulls, contra Jensen (1982), Rushton (1997), and Lewis et al (2011), Gould’s arguments about Morton were largely correct (Weisberg, 2014; Kaplan, Pigliucci, and Banta, 2015; Weisberg and Paul, 2016). Specifically, Weisberg (2014) writes that “Although Gould made some errors and overstated his case in a number of places, he provided prima facie evidence, as yet unrefuted, that Morton did indeed mismeasure his skulls in ways that conformed to 19th century racial biases.”
Now when it comes to this one, we’re getting somewhere:
Race does not Relate to Geographic Location
1. “There are no fixed traits with specific geographic locations …” because …
2. “… as often as isolation has created differences among populations, migration and mixing have blurred or erased them.”
3. “… our pictures of past ‘racial structures’ are almost always wrong” and harmful.
This is a good argument. However, in my opinion, it fails. Yes, there is no sharp delineation in traits between what are purported to be racial groupings; however, for biological racial realism to be true, there does not need to be. Take my article You Don’t Need Genes to Delineate Race. By looking at average facial and morphological features on any continent, we can say that, although there is no sharp gradation and there are clines in phenotypes, there is still an “average look” for the group. (Nyborg discusses “IQ” there, but I won’t get into it.)
Now take this one:
Races do not exist: We are Equals and Africans
1. “… all humans are closely related.”
2. “In a very real sense, all people alive today are Africans.”
3. “Genetic diversity in Africa is much larger than outside this continent.”
4. “Because they [migrants] were just a small subset of Africa’s population, the migrants took with them only a fraction of its genetic diversity.”
5. Admittedly, “… the longer two groups are separated, the more distinctive tweaks [mutations] they will acquire”, BUT …
6. “The concept of race has no genetic or scientific basis.” (NG here refers to a Craig Venter statement at a White House meeting, June 2000; see later).
7. “Science tells us there is no genetic or scientific basis for race. Races do not exist because we are [all] equals.”
1-5 are true, though 6-7 are false. In any case, the existence of race is not a scientific matter: the questions “What is race?”, “Is race real?”, and “If race is real, how many races are there?” are philosophical, not scientific, matters. Nyborg brings up “Lewontin’s fallacy”, but take what Hardimon (2017: 22-23) writes about the matter:
It is worth noting that the force of the argument against the existence of racialist races from Lewontin’s data analysis is unaffected by the critique A.W.F. Edwards made in his 2003 paper “Human Genetic Diversity: Lewontin’s Fallacy.” The fallacy Edwards imputes to Lewontin consists in inferring that racial classification has no taxonomic significance from the finding that the between-race component of human genetic diversity is very small. The inference is fallacious because the fact that the between-race component of human genetic diversity is small does not entail that racial classification has no taxonomic significance. Lewontin’s locus-by-locus analysis (which does not consider the possibility of a correlation between individual loci) does not preclude the possibility that individual loci might be correlated in such a way that people could be grouped into traditional racial categories. The underlying thought is that racial classification would have “taxonomic significance” were it possible to group people into traditional racial categories by making use of correlations between individual loci. However, Lewontin’s argument that there are no racialist races because the component of within-race genetic variation is larger than the component of between-race genetic variation is untouched by Edwards’ objection. That conclusion rests solely on Lewontin’s statistical analysis of human variation (the validity of which Edwards grants) and does not presuppose the absence of correlational structure in the genetic data. In short, Lewontin’s data do not preclude the possibility that racial classification might have taxonomic significance but they do preclude the possibility that racialist races exist.
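The Lewontin/Edwards point can be illustrated with a small simulation. The following is a minimal sketch (all parameters here, including the number of loci, the size of the allele-frequency differences, and the sample sizes, are illustrative assumptions of mine, not values from either paper): when many loci each differ only slightly between two groups, the between-group share of variance at every single locus stays tiny (Lewontin’s observation), yet aggregating information across loci classifies individuals into the two groups very reliably (Edwards’ point about correlational structure).

```python
# Illustrative sketch: small per-locus between-group variance,
# yet accurate aggregate classification. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_per_pop = 1000, 200

# Allele frequencies differ only slightly between the two populations.
p_a = rng.uniform(0.3, 0.7, n_loci)
p_b = np.clip(p_a + rng.normal(0, 0.05, n_loci), 0.01, 0.99)

# Genotypes: number of "1" alleles (0, 1, or 2) at each locus.
geno_a = rng.binomial(2, p_a, (n_per_pop, n_loci))
geno_b = rng.binomial(2, p_b, (n_per_pop, n_loci))

# Lewontin-style locus-by-locus partition: between-group share of variance.
grand = np.vstack([geno_a, geno_b])
between = ((geno_a.mean(0) - grand.mean(0)) ** 2
           + (geno_b.mean(0) - grand.mean(0)) ** 2) / 2
total = grand.var(0)
print("mean between-group variance share:", (between / total).mean())  # small

# Edwards-style aggregate classifier: sum the per-locus log-likelihood
# ratios across all loci; the sign of the score predicts the population.
llr1 = np.log(p_b / p_a)              # contribution per "1" allele
llr0 = np.log((1 - p_b) / (1 - p_a))  # contribution per "0" allele
scores_a = geno_a @ llr1 + (2 - geno_a) @ llr0
scores_b = geno_b @ llr1 + (2 - geno_b) @ llr0
acc = ((scores_a < 0).mean() + (scores_b > 0).mean()) / 2
print("classification accuracy:", acc)  # high despite tiny per-locus shares
```

Nothing in this toy model bears on which racial ontology is correct; it only shows why a small between-group variance component and reliable group classification are compatible, which is exactly the distinction Hardimon draws above.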
Nyborg is, obviously, pushing the concept of racialist races, though Hardimon has shown that they do not exist. Nyborg says that “Educability and IQ are arguable [sic] physiological (Spearman, 1927)”. Nope.
Nyborg then presents his next syllogism:
Admixture and Displacement Have Erased All Race Differences
1. Race implies unadmixed groups between which there are fixed—“fix”, in the sense of fixation index—traits.
2. (From Reich (2018)) race implies “primeval” groups “… separated tens of thousands of years ago”.
3. “Genetics shows that mixture and displacement have happened again and again” … and … as a result “differences have been blurred or erased”.
4. Thus, “there are no fixed traits associated with specific geographic locations…”
5. And “our pictures of past ‘racial structures’ are almost always wrong” and harmful.
Since human descent groups are mixed and do not exhibit fixed trait differences and since there are no 10-thousand-year-old primeval groups, there are no races.
This one is strong: if an eliminativist/anti-realist about race were to use this argument (remember, Nyborg doesn’t understand that social constructivists are realists about race), it would have force. But that human populations are mixed and do not exhibit fixed trait differences does not mean that race does not exist; that does not follow. That is a carry-over from the racialist concept of race, which is false.
Nyborg then presents his fifth syllogism:
Race is only Skin Color Deep
1. “When people speak about race, usually they seem to be referring to skin color and, at the same time, to something more than skin color.”
2. “This is the legacy of people such as Morton, who developed the “science” of race to suit his own prejudices and got the actual science totally wrong.”
3. “Science today tells us that the visible differences between people are accidents of history. They reflect how our ancestors dealt with sun exposure, and not much else.”
4. There is no homogenous African race.
Since race is only based on skin color, it is made up by racists.
I have heard an argument similar to this, and it fails. Race isn’t ONLY based on skin color; skin color is but one marker of race, along with ancestry and location. Of course, morphology and other phenotypic traits also ground the scientific concept of “race” (minimalist/populationist race). Race does not mean only skin color; there are many ways to delineate races, with skin color being but one tell.
Nyborg then writes:
Ducrest, Keller, and Roulin (2008) thus observed that darker color is associated with greater aggressiveness in 10 mammal species, three kinds of birds, and more; lizard forms entirely evaded them. They condemned the color analogue with respect to humans, and reacted forcefully when Rushton and Templer (2009) drew data from no less than 113 countries and found that “… murder, rape, and serious assault were associated with darker skin color, lower IQ, higher birth rate, higher infant mortality, higher HIV/AIDS rate, lower life expectancy, and lower income”
Yea, Ducrest, Keller, and Roulin (2008) is one study that Rushton loved, as it supposedly gave a basis for darker color being associated with aggressiveness in a slew of different animals. I rebutted Rushton and Templer, in any case. Their study was ridiculous, and they did not even heed what Ducrest, Keller, and Roulin (2008: 507) stated: “… that human populations are therefore not expected to consistently exhibit the associations between melanin-based coloration and the physiological and behavioural traits reported in our study.” Must be hard for Rushton and Templer to read.
In sum, Nyborg is wrong that racial constructivists claim that “Races do not exist”, for if races did not exist, then what would constructivists be fighting for? Nyborg seems to be talking to anti-realists/eliminativists about race. Nyborg pushes a racialist concept of race, which was refuted by Hardimon (2017: Chapter 1). Races exist in a minimal sense (Hardimon, 2017) and a U.S. sense (Spencer, 2014), but not in the racialist “HBD” sense. In this case, biological racial realism (Spencer, 2011) is true, but if we go by Kaplan and Winther’s (2014) definitions, Rushton, Jensen, Lynn, and Nyborg would be the biological racial realists, whereas I, Hardimon, and Spencer would be biogenomic/cluster race realists. It seems that Nyborg needs to brush up on the philosophical literature, because what he claims social constructivists about race believe is not true; he has strawmanned their position. In any case, I’ve shown that constructivists about race do not believe that race is not real. They may not believe that race is real in a biological sense, but they do in a social one, and that is enough for them to be race realists and believe that race exists.
(Also note how Kaplan and Winther (2014) note that “Social racial realism defends the existence of distinct human groups in our ordinary discourse and social interactions. Such groups are often identified and stabilized by “surface” factors such as skin color or facial features.” So, again, those who push a socialrace-type concept do not deny that race exists; on the contrary, they are realists about race. Nyborg got it wrong, and some of his critiques are good against those who deny the reality of race, but his racial ontology is false.)
As a bearded man, this has to be one of my favorite just-so stories. In Descent of Man, Darwin spoke quite a bit about the beard, and the different races/ethnies and the distribution of beards in them. Darwin (1871: 581) wrote:
On the other hand, bearded races admire and greatly value their beards ; among the Anglo-Saxons every part of the body had a recognised value … [also writing on pg 603] … for we know that with savages, the men of the beardless races take infinite pains in eradicating every hair from their faces as something odious, whilst the men of the bearded races feel the greatest pride in their beards.
Any man who has grown a beard admires and greatly values it. Darwin noted that beards were scant in Asian and Native American populations, as well as in Africa.
“Darwin specifically cited the human beard as a response to sexual selection serving mate attraction.” [Psychology Today, Beauty and the Beard]
Now, going off of this, we have this just-so story.
Men with the best beards attracted the most mates. Men who attracted the most mates had the most children. Men who had the most children had the best beards. Therefore, “beard genes” were naturally selected and eventually became fixated in males.
This has to be my favorite just-so story.
The New Republic almost word-for-word states my just-so story provided above:
Over the millennia, the theory goes, bearded men were more successful in procreation than their smoother competitors, and the human beard evolved into its present form.
With so many so-called adaptations and traits serving as “mate attractors”, how can selection “know” which trait to “act on”? This shows the ridiculousness of the just-so storytellers’ main weapon: reverse engineering.
Reverse engineering is “the inference from function to cause” (Richardson, 2007: 51). So they take the function (beards are seen as attractive to women) and then infer the cause (men with the best, most attractive beards were selected by women, and thus “beard genes” became fixated in males since the best beards signaled fitness to women). But there is a flaw with reverse engineering: when accounting for such “design” in nature in terms of adaptation, it can lead to just-so storytelling, that is, ad hoc hypotheses that explain what they purport to explain and only that.
Reverse engineering takes the design feature (beards) and extrapolates backward to presume its function (men with beards were seen as more attractive, and thus “beard genes” became fixated since that’s what women found attractive). Reverse engineering can be used to make any story sound coherent; such stories necessarily conform with the data. Indeed, as Smith (2016: 279; emphasis mine) writes:
An important weakness in the use of narratives for scientific purposes is that the ending is known before the narrative is constructed. Merton42 pointed out that a “disarming characteristic” of ex post facto explanations is that they are always consistent with the observations because they are selected to be so.
This is why it’s problematic to accept this type of storytelling: the stories are consistent with the observations because they are crafted that way, but there is no observation that would increase the probability of the hypothesis being true, so it is just storytelling. Absent a time machine, we cannot verify these types of stories.
So we notice that beards (obviously) exist. Then we work backward and infer the function from the already-known outcome (as seen above). We then create the story that they were seen as attractive to women, thus they have been selected by women since they make for a more attractive mate. That a hypothesis conforms with the observation is not evidence that the hypothesis in question is true.
Research exists showing that men with beards are seen as more formidable mates for long-term relationships (Dixson et al, 2016). McIntosh et al (2017) reported that “Women preferred full beards over clean-shaven faces and masculinised over feminised faces.” However, there is also data showing that men with beards are perceived as more dominant but not as more attractive than clean-shaven faces (Muscarella and Cunningham, 1996; Neave and Shields, 2008). But evidence for current adaptiveness is not evidence for evolutionary adaptiveness: the fact that X is adaptive today is not evidence that X was adaptive in the human OEE (original evolutionary environment). This again goes back to reverse engineering. Attempting to account for design in nature amounts to nothing but storytelling, since the hypotheses are inherently ad hoc. Evolutionary functional analysis most definitely leads to just-so storytelling, and the story of how and why men have beards falls prey to this as well.
Men’s beards may be the result of women’s choice, just as X may be the reason for Y, but without a way to independently verify the hypothesis in question, it is a just-so story. Showing that women today find men with beards to be better long-term partners is not evidence for the claim that beards became fixated in males due to female choice (sexual selection) in the OEE or at any other point in evolutionary history.
The problem with adaptationist explanations is that other modes of evolution, such as drift, mutation, and migration, are disregarded (see Gould and Lewontin, 1979; also see Simon, 2018 for other critiques of EP hypotheses, namely that they are not testable). Never mind the fact that the supposed mode of adaptation is natural selection for the specific fitness-enhancing trait, which cannot occur since NS has no mind and there are no laws of selection for trait fixation (Fodor, 2008; Fodor and Piattelli-Palmarini, 2010; also see Replies to Our Critics).
In sum, the story that beards were sexually selected-for by women due to X (no matter what X is, be it attractiveness, dominance, etc.) is a just-so story: it cannot be independently verified. Yes, beards are seen as status symbols, and there is a considerable amount of research on men’s beards and what women think they mean today, but, again, that X is adaptive today is not evidence that X was adaptive in an evolutionary context. Simplicity, coherence, and fruitfulness are no reasons to believe a hypothesis (Smith, 2016); the hypothesis must be independently testable, but there is no way to test this hypothesis—or any other—that trait X (beards, in this case) moved to fixation in virtue of its effect on reproductive fitness. Therefore it is a just-so story.