NotPoliticallyCorrect


Author Archives: RaceRealist

Five Years Away Is Always Five Years Away

1300 words

Five years away is always five years away. When one makes such a claim, they can always fall back on the “just wait five more years!” canard. Charles Murray is one who makes such claims. In an interview with the editor of Skeptic Magazine, Murray stated to Frank Miele:

I have confidence that in five years from now, and thereafter, this book will be seen as a major accomplishment.

This interview was in 1996 (after the release of the softcover edition of The Bell Curve), and so “five years” would be 2001. But such “predictions” from HBDers (that the next big thing for their ideology is only X years away) happen a lot. I’ve seen many HBDers claim that the evidence for their position will come out in just 5 to 10 years. Such claims seem strangely religious to me, and there is a reason for that. (See Conley and Domingue, 2016 for a molecular genetic refutation of The Bell Curve. Murray’s prediction failed: 22 years after The Bell Curve’s publication, the claims of Murray and Herrnstein were refuted.)

Numerous people throughout history have made predictions regarding the date of Christ’s return, some using calculations from the Bible to ascertain the date. The Wikipedia page for predictions and claims for the second coming of Christ lists many (obviously failed) predictions of His return.

Take John Wesley’s claim that Revelation 12:14 referred to the day that Christ should come. Or Charles Taze Russell’s (the first president of the Watch Tower Society of Jehovah’s Witnesses) claim that Jesus would return in 1874 and rule invisibly from heaven.

Russell’s beliefs began with Adventist teachings. While Russell at first did not accept the claim that Christ’s return could be predicted, that changed when he met the Adventist author Nelson Barbour. The Adventists taught that the End Times began in 1799 and that Christ returned invisibly in 1874, with a physical return in 1878. (When this did not come to pass, many followers left Barbour, and Russell maintained that Barbour had not gotten the date wrong, just the nature of the event.) All Christians who died before 1874 would be resurrected, and Armageddon would begin in 1914. Since WWI began in 1914, Russell took that as evidence that his prediction was coming to pass. So Russell sold his clothing stores, worth millions of dollars today, and began writing and preaching about Christ’s imminent return. This doesn’t need to be said, but the predictions obviously failed.

So the date of 1914 for Armageddon (when Christ is supposed to return) was arrived at by Russell from studying the Bible and the great pyramids:

A key component to the calculation was derived from the book of Daniel, Chapter 4. The book refers to “seven times”. He interpreted each “time” as equal to 360 days, giving a total of 2,520 days. He further interpreted this as representing exactly 2,520 years, measured from the starting date of 607 BCE. This resulted in the year 1914-OCT being the target date for the Millennium.
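The date arithmetic described above can be sketched in a few lines; this is just a restatement of Russell’s reasoning (not an endorsement of it), with the one non-obvious step being that there is no year zero between BCE and CE:

```python
# Russell's calculation as described above: seven "times" of 360 days
# each, reinterpreted as years, counted forward from 607 BCE.
days = 7 * 360            # 2,520 "days"
years = days              # reinterpreted as 2,520 years
start_bce = 607
end_ce = years - start_bce + 1  # +1 because there is no year 0
print(end_ce)  # 1914
```

Which is how 607 BCE plus 2,520 years lands on 1914 CE rather than 1913.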

Here is the prediction in Russell’s words: “…we consider it an established truth that the final end of the kingdoms of this world, and the full establishment of the Kingdom of God, will be accomplished by the end of A.D. 1914” (1889). When 1914 came and went (sans the beginning of WWI, which he took to be a sign of the End Times), Russell changed his view.

Now, we can liken the Russell situation to Murray. Murray claimed that five years after his book’s publication the “book would be seen as a major accomplishment.” Murray also made a similar claim back in 2016. Someone wrote to evolutionary biologist Joseph Graves about a talk Murray gave; Murray was offered an opportunity to debate Graves about his claims. Graves stated (my emphasis):

After his talk I offered him an opportunity to debate me on his claims at/in any venue of his choosing. He refused again, stating he would agree after another five years. The five years are in the hope of the appearance of better genomic studies to buttress his claims. In my talk I pointed out the utter weakness of the current genomic studies of intelligence and any attempt to associate racial differences in measured intelligence to genomic variants.

(Do note that this was back in April of 2016, about one year before I changed my hereditarian views to that of DST. I emailed Murray about this, he responded to me, and gave me permission to post his reply which you can read at the above link.)

Emil Kirkegaard stated on Twitter:

Do you wanna bet that future genomics studies will vindicate us? Ashkenazim intelligence is higher for mostly genetic reasons. Probably someone will publish mixed-ethnic GWAS for EA/IQ within a few years

Notice that “within a few years” is vague, though I would take that to be, as Kirkegaard states next, three years. Kirkegaard was much more specific for PGS (polygenic scores) and Ashkenazi Jews, stating that “causal variant polygenic scores will show alignment with phenotypic gaps for IQ eg in 3 years time.” I’ll remember this: January 6th, 2022. (Though it was just an “example given”, this is a good example of a prediction from an HBDer.) Never mind the problems with PGS/GWA studies (Richardson, 2017; Janssens and Joyner, 2019; Richardson and Jones, 2019).

I can see a prediction being made, it not coming to pass, and, just like Russell, one stating “No!! X, Y, and Z happened and that invalidated the prediction! The new one is X time away!” Being vague about timetables for as-yet-to-occur events is dishonest; stick to the claim, and if it does not occur, stop holding the view, just as Russell did. However, people like Murray won’t change their views; they’re too entrenched in this. Most may know that over two years ago I changed my views on hereditarianism (which “is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality”) due to two books: DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes and Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. But I may just be a special case here.

Genes, Brains, and Human Potential then led me to the work of Jablonka and Lamb, Denis Noble, David Moore, Robert Lickliter, and others—the developmental systems theorists. DST is completely at odds with the main “field” of “HBD”: behavioral genetics. See Griffiths and Tabery (2013) for why teasing apart genes and environment—nature and nurture—is problematic.

In any case, five years away is always five years away, especially with HBDers. That magic evidence is always “right around the corner”, despite the fact that none ever comes. I know that some HBDers will probably clamor that I’m wrong and that Murray or another “HBDer” has made a successful prediction without immediately pushing back its date. But, just like Charles Taze Russell, when the prediction does not come to pass, they can just make something up about how and why it failed, and everything will be fine.

I think Charles Murray should change his name to Charles Taze Russell, since he has pushed back the date of his prediction so many times. Though, to Russell’s credit, he did eventually recant his views. I would find it hard to believe that Murray would; he’s too deep in this game, and his career writing books and being an AEI pundit is on the line.

So I strongly doubt that Murray would ever come right out and say “I was wrong.” Too much money is on the line for him. (Note that Murray has a new book releasing in January titled Human Diversity: Gender, Race, Class, and Genes, and you know that I will give a scathing review of it, since I already know Murray’s MO.) It’s ironic to me: Most HBDers are pretty religious in their convictions and can and will explain away data that doesn’t line up with their beliefs, just like a theist.

Men Are Stronger Than Women

1200 words

The claim that “Men are stronger than women” does not need to be said—it is obvious through observation that men are stronger than women. To my (non-)surprise, I saw someone on Twitter state:

“I keep hearing that the sex basis of patriarchy is inevitable because men are (on average) stronger. Notwithstanding that part of this literally results from women in all stages of life being denied access to and discourage from physical activity, there’s other stuff to note.”

To which I replied:

“I don’t follow – are you claiming that if women were encouraged to be physically active that women (the population) can be anywhere *near* men’s (the population) strength level?”

I then got told to “Fuck off,” because I’m a “racist” (due to the handle I use and my views on the reality of race). In any case, it is true that part of this difference stems from cultural factors: think of women wanting the “toned” look and not wanting to get “big and bulky” (as if that happens overnight), and so avoiding heavy weights because they think they will become cartoonish.

Here’s the thing, though: Men have about 61 percent more total muscle mass than women (which is attributed to higher levels of testosterone), and most of the difference is allocated to the upper body. Men have about 75 percent more arm muscle mass than women, which accounts for men’s 90 percent greater upper-body strength. Men also have about 50 percent more lower-body muscle mass than women, which is related to men’s 65 percent greater lower-body strength (see references in Lassek and Gaulin, 2009: 322).

Men have around 24 pounds more skeletal muscle mass than women; in this study, women were about 40 percent weaker in the upper body and 33 percent weaker in the lower body (Janssen et al, 2000). Miller et al (1993) found that women had a 45 percent smaller cross-sectional area in the biceps brachii, 45 percent smaller in the elbow flexors, 30 percent smaller in the vastus lateralis, and 25 percent smaller CSA in the knee extensors, as I wrote in Muscular Strength by Gender and Race, where I concluded:

The cause for less upper-body strength in women is due to the distribution of women’s lean tissue being smaller.

Men have larger muscle fibers, which in my opinion is a large part of the reason for men’s strength advantage over women. Further, if women really were “discouraged” from physical activity, this would be a problem for their bone density: our bones are porous, and by doing a lot of activity we can strengthen them (see e.g., Fausto-Sterling, 2005). Bishop, Cureton, and Collins (1987) show that the sex difference in strength in close-to-equally-trained men and women “is almost entirely accounted for by the difference in muscle size,” which lends credence to the claim I made above.

Lindle et al (1997) conclude that:

… the results of this study indicate that Con strength levels begin to decline in the fourth rather than in the fifth decade, as was previously reported. Contrary to previous reports, there is no preservation of Ecc compared with Con strength in men or women with advancing age. Nevertheless, the decline in Ecc strength with age appears to start later in women than in men and later than Con strength did in both sexes. In a small subgroup of subjects, there appears to be a greater ability to store and utilize elastic energy in older women. This finding needs to be confirmed by using a larger sample size. Muscle quality declines with age in both men and women when Con peak torque is used, but declines only in men when Ecc peak torque is used. [“Con” and “Ecc” strength refer to concentric and eccentric actions]

Women are shorter than men and have less fat-free mass. Women also have a weaker grip; even when matched for height and weight, men had higher levels of lean mass than women (92 and 79 percent, respectively), and men had greater bone mineral density (BMD) and bone mineral content (BMC) than women (Nieves et al, 2009). Now do some quick thinking: do you think that someone with weaker bones could be stronger than someone with stronger bones? If person A had higher levels of BMC and BMD than person B, who do you think would perform better on any strength test—the one with the weaker or the stronger bones? Quite obviously, the stronger one’s bones are, the more weight they can bear on them. So if someone with weak bones (low BMC/BMD) puts a heavy load on their back, their bones could snap during the lift.

Alswat (2017) reviewed the literature on bone density between men and women and found that men had higher BMD in the hip and higher BMC in the lower spine. Women also had bone fractures earlier than men. Some of this is no doubt cultural, as explained above. However, even if we had a boy and a girl locked in a room for their whole lives and they did the same exact things, ate the same food, and lifted the same weights, I would bet my freedom that there still would be a large difference between the two, skewing where we know it would skew. Women are more likely to suffer from osteoporosis than are men (Sözen, Özışık, and Başaran 2016).

So if women have weaker bones compared to men, then how could they possibly be stronger? Even if men and women had the same kind of physical activity down to a tee, could you imagine women being stronger than men? I couldn’t—but that’s because I have more than a basic understanding of anatomy and physiology and what that means for differences in strength—or running—between men and women.

I don’t doubt that there are cultural reasons that account for part of the large differences in strength between men and women—I do doubt, though, that the gap can be meaningfully closed. Yes, biology interacts with culture. The developmental variables that coalesce to make men “Men” and those that coalesce to make women “Women” converge in creating the stark differences in phenotype between the sexes, which then explains how the sex differences manifest themselves.

Differences in bone strength between men and women, along with distribution of lean tissue, differences in lean mass, and differences in muscle size explain the disparity in muscular strength between men and women. You can even imagine a man and woman of similar height and weight and they would, of course, look different. This is due to differences in hormones—the two main players being testosterone and estrogen (see Lang, 2011).

So yes, part of the difference in strength between men and women is rooted in culture and how we view women who strength train (way more women should strength train, as a matter of fact), though I find it hard to believe that, even if the “cultural stigma” around women who lift heavy weights at the gym disappeared overnight, women would be stronger than men. Differences in strength exist between men and women, and this difference exists due to the complex relationship between biology and culture—nature and nurture (which cannot be disentangled).

DNA—Blueprint and Fortune Teller?

2500 words

What would you think if you heard about a new fortune-telling device that is touted to predict psychological traits like depression, schizophrenia and school achievement? What’s more, it can tell your fortune from the moment of your birth, it is completely reliable and unbiased — and it only costs £100.

This might sound like yet another pop-psychology claim about gimmicks that will change your life, but this one is in fact based on the best science of our times. The fortune teller is DNA. The ability of DNA to understand who we are, and predict who we will become has emerged in the last three years, thanks to the rise of personal genomics. We will see how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. This is a game-changer as it has far-reaching implications for psychology, for society and for each and every one of us.

This DNA fortune teller is the culmination of a century of genetic research investigating what makes us who we are. When psychology emerged as a science in the early twentieth century, it focused on environmental causes of behavior. Environmentalism — the view that we are what we learn — dominated psychology for decades. From Freud onwards, the family environment, or nurture, was assumed to be the key factor in determining who we are. (Plomin, 2018: 6, my emphasis)

The main premise of Plomin’s 2018 book Blueprint is that DNA is a fortune teller and personal genomics a fortune-telling device. The fortune-telling device Plomin most discusses in the book is the polygenic score (PGS). PGSs are gleaned from GWA studies: each SNP genotype is coded 0, 1, or 2 (copies of the effect allele), weighted, and summed, and the individual thereby gets their PGS for trait T. Plomin’s claim that DNA is a fortune teller falls, though, since DNA is not a blueprint—and the blueprint premise is where the fortune-teller claim is derived from.
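A minimal sketch of the PGS computation just described may help; the SNP names, effect sizes, and genotypes below are entirely hypothetical, not real GWAS estimates:

```python
# Sketch of a polygenic score (PGS): each SNP genotype is coded as
# 0, 1, or 2 copies of the effect allele, multiplied by that SNP's
# GWAS-estimated effect size, and the products are summed.
effect_sizes = {"rs0001": 0.03, "rs0002": -0.01, "rs0003": 0.02}  # hypothetical
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # one person's allele counts

pgs = sum(effect_sizes[snp] * genotypes[snp] for snp in effect_sizes)
print(round(pgs, 3))  # 2*0.03 + 1*(-0.01) + 0*0.02 = 0.05
```

Note that the whole procedure inherits its meaning from the GWAS effect sizes: if those associations are confounded (by population stratification, for example), the sum is confounded too.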

It’s funny that Plomin calls the measure “unbiased” (he is talking about DNA, which is in effect “unbiased”), but PGSs are anything BUT unbiased. Most GWASs/PGSs are derived from European populations, and there are “biases and inaccuracies of polygenic risk scores (PRS) when predicting disease risk in individuals from populations other than those used in their derivation” (De La Vega and Bustamante, 2018). (PRSs are derived from statistical gene associations using GWAS; Janssens and Joyner, 2019.) Europeans make up more than 80 percent of GWAS samples. Due to the large number of GWASs on European populations, “prediction accuracy [is] reduced by approximately 2- to 5-fold in East Asian and African American populations, respectively” (Martin et al, 2018). See for example Figure 1 from Martin et al (2018):


With the huge number of GWAS studies done on European populations, these scores cannot be used on non-European populations for ‘prediction’—even disregarding the other problems with PGS/GWAS.

By studying genetically informative cases like twins and adoptees, behavioural geneticists discovered some of the biggest findings in psychology because, for the first time, nature and nurture could be disentangled.

[…]

… DNA differences inherited from our parents at the moment of conception are the consistent, lifelong source of psychological individuality, the blueprint that makes us who we are. A blueprint is a plan. … A blueprint isn’t all that matters but it matters more than everything else put together in terms of the stable psychological traits that make us who we are. (Plomin, 2018: 6-8, my emphasis)

Never mind the slew of problems with twin and adoption studies (Joseph, 2014; Joseph et al, 2015; Richardson, 2017a). I also refuted the notion that “A blueprint is a plan” last year, quoting numerous developmental systems theorists. The main thrust of Plomin’s book—that DNA is a blueprint and can therefore be used as a fortune teller to tell the fortunes of the people whose DNA is analyzed—is false, as DNA does not work the way Plomin imagines.

These big findings were based on twin and adoption studies that indirectly assessed genetic impact. Twenty years ago the DNA revolution began with the sequencing of the human genome, which identified each of the 3 billion steps in the double helix of DNA. We are the same as every other human being for more than 99 percent of these DNA steps, which is the blueprint for human nature. The less than 1 per cent of difference of these DNA steps that differ between us is what makes us who we are as individuals — our mental illnesses, our personalities and our mental abilities. These inherited DNA differences are the blueprint for our individuality …

[DNA predictors] are unique in psychology because they do not change during our lives. This means that they can foretell our futures from our birth.

[…]

The applications and implications of DNA predictors will be controversial. Although we will examine some of these concerns, I am unabashedly a cheerleader for these changes. (Plomin, 2018: 8-10, my emphasis)

This quote further shows the “blueprint” for the rest of Plomin’s book—DNA can “foretell our futures from our birth”—and how it colors the conclusions he draws from the work he discusses. Yes, all scientists are biased (as Stephen Jay Gould noted), but Plomin outright claims to be an unabashed cheerleader for these changes, and that self-admission does explain some of the conclusions he reaches in Blueprint.

However, the problem with the mantra ‘nature and nurture’ is that it runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled.

[…]

Our future is DNA. (Plomin, 2018: 11-12)

The problem with the mantra “nature and nurture” is not that it “runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled”—though that is one problem. The problem is how Plomin assumes DNA works. That DNA can be disentangled from the environment presumes that DNA is environment-independent. But as Moore shows in his book The Dependent Gene—and as Schneider (2007) shows—“the very concept of a gene requires the environment”. Moore notes that “The common belief that genes contain context-independent “information”—and so are analogous to “blueprints” or “recipes”—is simply false” (quoted in Schneider, 2007). Moore showed in The Dependent Gene that twin studies are flawed, as have numerous other authors.

Lewkowicz (2012) argues that “genes are embedded within organisms which, in turn, are embedded in external environments. As a result, even though genes are a critical part of developmental systems, they are only one part of such systems where interactions occur at all levels of organization during both ontogeny and phylogeny.” Plomin—although he does not explicitly state it—is a genetic reductionist. This type of thinking can be traced back, most popularly, to Richard Dawkins’ 1976 book The Selfish Gene. The genetic reductionists can, and do, make the claim that organisms can be reduced to their genes, while developmental systems theorists claim that holism, and not reductionism, better explains organismal development.

The main thrust of Plomin’s Blueprint rests on (1) GWA studies and (2) the PGSs/PRSs derived from them. Ken Richardson (2017b) has shown that there is “some cryptic but functionally irrelevant genetic stratification in human populations, which, quite likely, will covary with social stratification or social class.” Richardson’s (2017b) argument is simple: Societies are genetically stratified; social stratification maintains genetic stratification; social stratification creates—and maintains—cognitive differentiation; “cognitive” tests reflect prior social stratification. This “cryptic but functionally irrelevant genetic stratification in human populations” is what GWA studies pick up. Richardson and Jones (2019) extend the argument: spurious correlations can arise from genetic population structure that GWA studies cannot account for. Even though GWA study authors claim that population stratification is accounted for, social class is defined solely on the basis of SES (socioeconomic status) and therefore does not capture all of what “social class” itself captures (Richardson, 2002: 298-299).

Plomin also heavily relies on the results of twin and adoption studies—a lot of it being his own work—to attempt to buttress his arguments. However, as Moore and Shenk (2016) show—and as I have summarized in Behavior Genetics and the Fallacy of Nature vs Nurture—heritability estimates for humans are highly flawed since there cannot be a fully controlled environment. Moore and Shenk (2016: 6) write:

Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much. In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’ 31 Reviewing the evidence, we come to the same conclusion.

Heritability estimates assume that nature (genes) can be separated from nurture (environment), but “the very concept of a gene requires the environment” (Schneider, 2007) so it seems that attempting to partition genetic and environmental causation of any trait T is a fool’s—reductionist—errand. If the concept of gene depends on and requires the environment, then how does it make any sense to attempt to partition one from the other if they need each other?
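To make concrete what these heritability estimates actually compute, here is Falconer’s classic twin-study formula, shown only to illustrate the partitioning assumption under dispute; the correlations below are hypothetical, not from any real study:

```python
# Falconer's formula: h2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are
# the trait correlations for identical (MZ) and fraternal (DZ) twin
# pairs. The subtraction only isolates a "genetic" share if genetic and
# environmental influences are assumed separable and environments are
# assumed equal across twin types -- exactly the assumptions at issue.
r_mz = 0.80  # hypothetical identical-twin correlation
r_dz = 0.50  # hypothetical fraternal-twin correlation

h2 = 2 * (r_mz - r_dz)
print(round(h2, 2))  # 0.6 under these assumed correlations
```

The arithmetic is trivial; the point is that every step of it presupposes the very nature/nurture partition that Schneider (2007) and Moore and Shenk (2016) argue is incoherent for uncontrolled human environments.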

Let’s face it: Plomin, in his book Blueprint, speaks like a biological reductionist, though he may deny the claim. The claims from those who push PRSs for precision medicine are unfounded, as there are numerous problems with the concept. Precision medicine and personalized medicine are similar concepts, though Joyner and Paneth (2015) are skeptical of their use and have seven questions for personalized medicine. Furthermore, Joyner, Boros and Fink (2018) argue that “redundant and degenerate mechanisms operating at the physiological level limit both the general utility of this assumption and the specific utility of the precision medicine narrative.” Joyner (2015: 5) also argues that “Neo-Darwinism has failed clinical medicine. By adopting a broader perspective, systems biology does not have to.”

Janssens and Joyner (2019) write that “Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest.” Researchers can show correlations between disease phenotypes and genes, but they cannot show causation—which would require mechanistic relations between the proposed genes and the disease phenotype. And, as Kampourakis (2017: 19) notes, genes do not cause diseases on their own; they only contribute to their variation.

Edit: Take also this quote from Plomin and Stumm (2018) (quoted by Turkheimer):

GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.

Turkheimer goes on to say that this (false) assumption by Plomin and Stumm (2018) assumes that there is no top-down causation—i.e., that phenotypes don’t affect genes, that there is no causation from the top to the bottom. (See the special issue of Interface Focus for a slew of articles on top-down causation.) Downward (top-down) causation exists in biological systems (Noble, 2012, 2017). The very claim that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” is ridiculous! This is something that, of course, Plomin did not discuss in Blueprint. But in a book that supposedly shows “how DNA makes us who we are”, why not discuss epigenetics? Plomin is confused, because DNA methylation impacts behavior and behavior impacts DNA methylation (Lerner and Overton, 2017: 114). Lerner and Overton (2017: 145) write that:

… it should no longer be possible for any scientist to undertake the procedure of splitting of nature and nurture and, through reductionist procedures, come to conclusions that the one or the other plays a more important role in behavior and development.

Plomin’s reductionist take, therefore, again fails. Plomin’s “reluctance” to discuss “tangential topics” to “inherited DNA differences” included epigenetics (Plomin, 2018: 12). But that “reluctance” was a downfall of his book, as epigenetic mechanisms can and do make a difference to “inherited DNA differences” (see for example Baedke, 2018, Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics; Meloni, 2019, Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics; see also Meloni, 2018). The genome can and does “react” to what occurs to the organism in the environment, so it is false that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” (Plomin and Stumm, 2018), since our behavior and actions can and do methylate our DNA (Meloni, 2014), which falsifies Plomin’s claim and is why he should have discussed epigenetics in Blueprint. End Edit

Conclusion

So the main premise of Plomin’s Blueprint rests on two claims: (1) that DNA is a fortune teller and (2) that personal genomics is a fortune-telling device. He draws these big claims from PGS/PRS studies. However, over 80 percent of GWA studies have been done on European populations, and since we cannot use these datasets for prediction in non-European populations, the uses of PGSs/PRSs there are greatly hampered—though PGSs/PRSs are not that useful in and of themselves even for European populations. Plomin’s whole book is a reductionist screed—“Sure, other factors matter, but DNA matters more” is one of his main claims. But since there is no privileged level of causation, one cannot, a priori, privilege DNA over any other developmental variable (Noble, 2012). To understand disease, we must understand the whole system: how, when one part becomes dysfunctional, it affects the other parts and how the system runs. The PGS/PRS hunts are reductionist in nature, and the only answer to these reductionist paradigms is a new paradigm from systems biology—one of holism.

Plomin’s assertions in his book are gleaned from highly confounded GWA studies. Plomin also assumes that we can disentangle nature and nurture—like all reductionists. Nature and nurture interact—without genes, there would be an environment, but without an environment, there would be no genes as gene expression is predicated on the environment and what occurs in it. So Plomin’s reductionist claim that “Our future is DNA” is false—our future is studying the interactive developmental system, not reducing it to a sum of its parts. Holistic biology—systems biology—beats reductionist biology—the Neo-Darwinian Modern Synthesis.

DNA is not a blueprint or a fortune teller, and personal genomics is not a fortune-telling device. These claims of Plomin’s are derived from highly flawed GWA studies and, further, from PGSs/PRSs; they are therefore false.

(Also read Eric Turkheimer’s 2019 review of Plomin’s book The Social Science Blues, along with Steven Pittelli’s review Biogenetic Overreach, for overviews and critiques of Plomin’s ideas. And read Ken Richardson’s article It’s the End of the Gene As We Know It for a critique of the concept of the gene.)

Prediction, Accommodation, and Explanation in Science: Are Just-so Stories Scientific?

2300 words

One debate in the philosophy of science is whether a scientific hypothesis should make testable predictions or merely explain what it purports to explain. Should a scientific hypothesis H predict previously unknown facts of the matter, or only explain an observation? Take, for example, evolutionary psychology (EP). Any EP hypothesis H can speculate on the so-called causes that led a trait to fixate in a biological population of organisms, but that is all it can do; EP hypotheses cannot generate successful predictions of previously unknown facts not used in the construction of the hypothesis. The claim, therefore, that EP hypotheses are anything but just-so stories is false.

Prediction and novel facts

For example, Einstein’s theory of general relativity predicted the bending of light, which was a novel prediction for the theory (see pg 177-180 for predictions generated from Einstein’s theory). Fresnel’s wave theory of light predicted diffraction fringes and, most strikingly, the white spot—a spot which appears in a circular object’s shadow due to Fresnel diffraction (see Worrall, 1989). So Fresnel’s theory explained the diffraction, and the theory then generated testable—and successful—novel predictions (see Magnus and Douglas, 2013). That is an example of successful novel prediction. Ad hoc hypotheses, by contrast, are produced “for this” explanation—the only evidence for the hypothesis is, for example, the existence of trait T. EP hypotheses attempt to explain the fixation of traits in humans, but explaining is all they do—they generate no testable, novel predictions of previously unknown facts.

A defining feature of science and what it purports to do is to predict facts-of-the-matter which are yet to be known. John Beerbower (2016) explains this well in his book Limits of Science? (emphasis mine):

At this point, it seems appropriate to address explicitly one debate in the philosophy of science—that is, whether science can, or should try to, do more than predict consequences. One view that held considerable influence during the first half of the twentieth century is called the predictivist thesis: that the purpose of science is to enable accurate predictions and that, in fact, science cannot actually achieve more than that. The test of an explanatory theory, therefore, is its success at prediction, at forecasting. This view need not be limited to actual predictions of future, yet to happen events; it can accommodate theories that are able to generate results that have already been observed or, if not observed, have already occurred. Of course, in such cases, care must be taken that the theory has not simply been retrofitted to the observations that have already been made—it must have some reach beyond the data used to construct the theory.

That a theory or hypothesis explains observations isn’t enough—it must generate successful predictions of novel facts. If it does not generate any novel facts-of-the-matter, then of what use is the hypothesis if it only weakly justifies the phenomenon in question? So now, what is a novel fact?

A novel fact is a fact generated by hypothesis H that was not used in the construction of the hypothesis. For example, Musgrave (1988) writes:

All of this depends, of course, on our being able to make good the intuitive distinction between prediction and novel prediction. Several competing accounts of when a prediction is a novel prediction for a theory have been produced. The one I favour, due to Elie Zahar and John Worrall, says that a predicted fact is a novel fact for a theory if it was not used to construct that theory — where a fact is used to construct a theory if it figures in the premises from which that theory was deduced.

Mayo (1991: 524; her emphasis) writes that a “novel fact [is] a newly discovered fact—one not known before used in testing.” So a fact is novel when it was not used in the construction of the hypothesis—i.e., when the hypothesis predicts a fact of the matter, such as a future event, that did not figure in its construction. About novel predictions, Musgrave also writes that “It is only novel predictive success that is surprising, where an observed fact is novel for a theory when it was not used to construct it.” So: hypothesis H entails evidence E; evidence E was not used in the construction of hypothesis H; therefore E is novel evidence for hypothesis H.

To philosopher of science Imre Lakatos, a progressive research program is one that generates novel facts, whereas a degenerating research program either fails to generate novel facts or has its novel predictions continually falsified, according to Musgrave in his article on Lakatos. We can put EP in the “degenerating research program” category, as no EP hypothesis generates any type of novel prediction—the only evidence for the hypothesis is the existence of the trait.

Evolutionary Psychology

The term “just-so stories” comes from Rudyard Kipling’s Just So Stories for Little Children. Gould and Lewontin then used the term for evolutionary hypotheses that can only explain and cannot predict as-yet-unknown events. Law (2016) notes that just-so stories offer “little in the way of independent evidence to suggest that it is actually true.” Sterelny and Griffiths (1999: 61) state that a just-so story is “… an adaptive scenario, a hypothesis about what a trait’s selective history might have been and hence what its function may be.” Examples of just-so stories covered on this blog include: beards, FOXP2, cartels and Mesoamerican ritual sacrifice, Christian storytelling, just-so storytellers and their pet just-so stories, the slavery hypertension hypothesis, fear of snakes and spiders, and cold winter theory. Smith (2016: 278) has a helpful table showing ten different definitions and descriptions of just-so stories:

[Table from Smith (2016: 278): ten definitions and descriptions of just-so stories]

So the defining criterion here is independent evidence: for a proposed explanation of a trait’s existence to be more than a just-so story, there must be independent reasons to believe it, since the defining feature of a scientific hypothesis or theory is whether or not it can predict yet-to-happen events. Though, as Beerbower notes, we must take care that the theory has not simply been retrofitted to the observations.

One can make an observation. Then one can work backward (what Richardson (2007) calls “reverse engineering”) and posit (speculate about) a good-sounding story (just-so storytelling) to explain that observation. Reverse engineering is “a process of figuring out the design of a mechanism on the basis of an analysis of the tasks it performs” (Buller, 2005: 92). The just-so storyteller can then create a story to explain the fixation of the trait in question. But that is only (purportedly) an explanation of why the trait came to fixation for us to observe it today. There are no testable predictions of previously unknown facts. So it’s all storytelling—speculation.

The theory of natural selection is then deployed to attempt to explain the fixation of trait T in a population. It is true that a hypothesis is weakly corroborated by the existence of trait T, but what makes it a just-so story is that there are no successful predictions of previously unknown facts.

When it comes to EP, one can say that the hypothesis “makes sense” and it “explains” why trait T still exists and went to fixation. However, the story only “makes sense” because there is no other way for it to be—if the story didn’t “make sense”, then the just-so storyteller wouldn’t be telling the story because it wouldn’t satisfy their aims of “proving” that a trait is an adaptation.

Smith (2016: 277-278) notes seven just-so story triggers:

1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.

EP is most guilty of (3), (4), (5), (6), and (7). It is guilty of (3) in that it hardly ever posits other explanations for trait T—the answer is always “adaptation”, as EP is an adaptationist paradigm. It is guilty of (4) perhaps the most: that trait T still exists and is useful today is not evidence that trait T was selected for the use we see today. This then leads to (5), the misuse of reverse engineering. Just-so stories are ad hoc (“for this”) explanations, and such explanations are ad hoc if there is no independent data for the hypothesis. And of course it is guilty of (7) in that it attempts to explain unique events in human evolution. Many problems exist for evolutionary psychology (see for example Samuels, 1998; Lloyd, 1999; Prinz, 2006), but the biggest problem is the inability of any of its hypotheses to generate testable, novel predictions. Smith (2016: 279) further writes that:

An important weakness in the use of narratives for scientific purposes is that the ending is known before the narrative is constructed. Merton pointed out that a “disarming characteristic” of ex post facto explanations is that they are always consistent with the observations because they are selected to be so.

Bo Winegard, in his defense of just-so storytelling, writes that “inference to the best explanation most accurately describes how science is (and ought to be) practiced. According to this description, scientists forward theories and hypotheses that are coherent, parsimonious, and fruitful.” However, as Smith (2016: 280-281) notes, that a hypothesis is “coherent”, “parsimonious”, and “fruitful” (along with 11 more explanatory virtues of IBE, including depth, precision, consilience, and simplicity) is not sufficient reason to accept it—IBE is not a solution to the problems raised by the just-so story critics, as the slew of explanatory virtues does not lend evidence that trait T was an adaptation and thus does not lend evidence that hypothesis H is true.

Simon (2018: 5) concludes that “(1) there is much rampant speculation in evolutionary psychology as to the reasons and the origin for certain traits being present in human beings, (2) there is circular reasoning as to a particular trait’s supposed advantage in adaptability in that a trait is chosen and reasoning works backward to subjectively “prove” its adaptive advantage, (3) the original classical theory is untestable, and most importantly, (4) there are serious doubts as to Natural Selection, i.e., selection through adaptive advantage, being the principal engine for evolution.” (1) is true since that’s all EP is—speculation. (2) is true in that evolutionary psychologists notice trait T and reason that, since it survives today, there must be a function it performs for which natural selection “selected” the trait to propagate in the species (though selection cannot select-for certain traits). (3) is true in that we have no time machine to go back and watch how trait T evolved (this is where the storytelling narrative comes in: if only we had a good story to tell about the evolution of trait T). And finally, (4) is also true since natural selection is not a mechanism (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010).

EP exists in an attempt to explain so-called psychological adaptations humans have to the EEA (environment of evolutionary adaptiveness). So one looks at the current phenotype and then looks to the past in an attempt to construct a “story” which shows how a trait came to fixation. There are, furthermore, no hallmarks of adaptation. When one attempts to use selection theory to explain the fixation of trait T, they must wrestle with spandrels. Spandrels are heritable, can increase fitness, and they are selected as well—as the whole organism is selected. This also, of course, falls right back to Fodor’s (2008) argument against natural selection. Fodor (2008: 1) writes that the central claim of EP “is that heritable properties of psychological phenotypes are typically adaptations; which is to say that they are typically explained by their histories of selection.” But if “psychological phenotypes” cannot be selected, then the whole EP paradigm crumbles.

Conclusion

This is why EP is not scientific: it cannot make successful predictions of previously unknown facts not used in the construction of its hypotheses; it can only explain what it purports to explain. The claim, therefore, that EP hypotheses are anything but just-so stories is false. One can create a good-sounding narrative for any type of trait. But that a story “sounds good” to the ear and is “plausible” are not reasons to believe it is true.

Are all hypotheses just-so stories? No. Since a just-so story is an ad hoc hypothesis, and a hypothesis is ad hoc if it cannot be independently verified, a hypothesis that makes predictions which can be independently verified is not a just-so story. There are hypotheses that generate no predictions, ad hoc hypotheses (where the only evidence to believe H is the existence of trait T), and hypotheses that generate novel predictions. EP is the second of these—the only evidence we have to believe H is that trait T exists. Independent evidence is a necessary condition of science—that is, the ability of a hypothesis to predict novel evidence is a necessary condition for science. That no EP hypothesis can generate a successful novel prediction is evidence that all EP hypotheses are just-so stories. So for the criticism to be refuted, one would have to name an EP hypothesis that is not a just-so story—that is, (1) name an EP hypothesis, (2) state the prediction, and (3) state how the prediction follows from the hypothesis.

To be justified in believing hypothesis H about how trait T became fixed in a population, there must be independent evidence for the belief. The hypothesis must generate a novel fact—one unknown before the hypothesis was constructed. If the hypothesis cannot generate any predictions, or the predictions it makes are continually falsified, then the hypothesis is to be rejected. No EP hypothesis can generate successful predictions of novel facts, and so the whole EP enterprise is a degenerating research program. The EP paradigm explains and accommodates, but it generates no independently confirmable evidence for any of its hypotheses. Therefore EP is not a scientific program, and just-so stories are not scientific.

Just-so Stories: Cartels and Mesoamerican Ritual Sacrifice

1550 words

Mexican drug cartels kill in some of the most heinous ways I’ve ever seen. I won’t link to them here, but a simple Google search will show you the brutal, heinous ways in which they kill rivals and snitches. Why do they kill like this? I have a simple just-so story to explain it: Mexican drug cartels—and similar groups—kill the way they do because they are descended from Aztecs, Maya, and other similar groups who enacted ritual sacrifices to appease their gods.

For example, Munson et al (2014) write:

Among the most noted examples, Aztec human sacrifice stands out for its ritual violence and bloodshed. Performed in the religious precincts of Tenochtitlan, ritual sacrifice was a primary instrument for social integration and political legitimacy that intersected with militaristic and marketplace practices, as well as with beliefs about the cosmological order. Although human sacrifice was arguably less common in ancient Maya society, physical evidence indicates that offerings of infant sacrifices and other rituals involving decapitation were important religious practices during the Classic period.

The Aztecs believed that sacrificial blood-letting appeased their gods, who fed on the human blood. They also committed the sacrifices “so that the sun could continue to follow its course” (Garraud and Lefrere, 2014). Their sun god—Uitzilopochtli—was given strength by sacrificial bloodletting, which benefited the Aztec population “by postponing the end of the world” (Trewby, 2013). The Aztecs also sacrificed children to their rain god Tlaloc (Froese, Gershenson, and Manzanilla, 2014). Further, the Aztec ritual of cutting out still-beating hearts arose from the Maya-Toltec traditions (Ceruti, 2015).

Regarding Aztec sacrifices, Winkelman (2014: 50) writes:

Anthropological efforts to provide a scientific explanation for human sacrifice and cannibalism were initiated by Harner (1970, 1977a, 1977b). Harner pointed out that the emic normalcy of human sacrifice—that it is required by one’s gods and religion—does not alone explain why such beliefs and behaviours were adopted in specific societies. Instead, Harner proposed explanations based upon causal factors found in population pressure. Harner suggested that the magnitude of Aztec human sacrifice and cannibalism was caused by a range of demographic-ecological conditions—protein shortages, population pressure, unfavourable agricultural conditions, seasonal crop failures, the lack of domesticated herbivores, wild game depletion, food scarcity and famine, and environmental circumscription limiting agricultural expansion.

So, along with appeasing and “feeding” their gods, there were sociological reasons for why they committed human sacrifices, and even cannibalism.

When it comes to the Maya (a civilization that independently discovered numerous things while being completely isolated from other civilizations), they had a game called pok-ta-tok—named for the sound the ball made when the players hit it or it fell to the ground. As described in the Popol Vuh (the K’iche’ Maya book that lays out their creation myth), humans and the lords of the Underworld (called Xibalba) played this game. The Maya Hero Twins Hunahpu and Xbalanque went down to Xibalba to do battle against its lords (see Zaccagnini, 2003: 16-20 for a description of the Hero Twins myth and how it relates to pok-ta-tok, and also Myers (2002: 6-13)). See Tokovinine (2002) for more information on pok-ta-tok.

This game was created by the Olmec, a precursor people to the Maya, and later played by the Aztecs. The court was seen as the portal to Xibalba. The Aztecs then took up the game and continued the tradition of killing the losing team. The rubber ball [1] weighed around ten pounds, and so it must have caused a lot of bruising and head injuries to players hit in the head and body—they used their forearms and thighs to pass the ball. (See The Brutal and Bloody History of the Mesoamerican Ball Game, Where Sometimes Loss Was Death.)

According to Zaccagnini (2003: 6), “The ballgame was executed for many reasons, which include social functions, for recreation or the mediation of conflict for instance, the basis for ritualized ceremony, and for political purposes, such as acting as a forum for the opposing groups to compete for political status (Scarborough 1991:141).” Zaccagnini (2003: 7-8) states that the most vied-for participants in the game were captured Maya kings and that they were considered “trophies” by the people of the kings who captured them. Those who were captured had to play the game and were—essentially—playing for their lives. The Maya used the game as a stand-in for war, which is seen in the fact that they played against Toltecs invading their region (Zaccagnini, 2003: 8).

Death by decapitation awaited the losers of the game, and, sometimes, skulls of the losing players were used inside the rubber balls with which the game was played. The Maya word for ball—quiq—literally means “sap” or “blood”, which refers to how the rubber ball itself was constructed. Zaccagnini (2003: 11) notes that “The sap can be seen as a metaphoric blood which flows from the tree to give rise to the execution of the ballgame and in this respect, can imply further meaning. The significance of blood in the ballgame, which implies death, is tremendous and this interpretation of the connection of blood and the ball correlated with the notion that the ball is synonymous with the human head is important.” (See both Zaccagnini (2003) and Tokovinine (2002) for pictures of Maya hieroglyphs which depict winning and losing teams, decapitations, among other things.)

So the game was won when the ball passed through the hoop, which hung from a wall 20-30 feet in the air. These courts, too, were linked to celestial events (Zaccagnini, 2003). It has been claimed that the ball passing through the hoop depicted the earth passing through the center of the Milky Way.

Avi Loeb notes that “The Mayan culture collected exquisite astronomical data for over a millennium with the false motivation that such data would help predict its societal future. This notion of astrology prevented the advanced Mayan civilization from developing a correct scientific interpretation of the data and led to primitive rituals such as the sacrifice of humans and acts of war in relation to the motions of the Sun and the planets, particularly Venus, on the sky.” The planets and constellations, of course, were also of importance in Maya society. Šprajc (2018) notes that “Venus was one of the most important celestial bodies”, while also stating:

Human sacrifices were believed necessary for securing rain, agricultural fertility, and a proper functioning of the universe in general. Since the captives obtained in battles were the most common sacrificial victims, the military campaigns were religiously sanctioned, and the Venus-rain-maize associations became involved in sacrificial symbolism and warfare ritual. These ideas became a significant component of political ideology, fostered by rulers who exploited them to satisfy their personal ambitions and secular goals. In sum, the whole conceptual complex surrounding the planet Venus in Mesoamerica can be understood in the light of both observational facts and the specific socio-political context.

The relationship between the ballgame, Venus, and the fertility of the land in the agricultural cycle is also noted by Šprajc (2018). The Maya were expert astronomers who constantly watched the skies and interpreted what occurred in the cosmos in the context of their beliefs.

I have just described the ritualistic sacrifices of the Maya. This, then, is linked to my just-so story, which I first espoused on Twitter back in July of 2018:

Then in January of this year, white nationalist Angelo John Gage unironically used my just-so story!:

Needless to say, I found it hilarious that it was used unironically. Of course, since Mexicans and other Mesoamericans are descendants of the Aztec, Maya, and other groups native to the area, one can make this story “fit with” what we observe today. Going back to the analysis of the Maya ballgame pok-ta-tok above, the Maya were quite obviously brutal in their decapitations of the losing teams. Since they decapitated the losing players, this could be framed as a kind of cultural transmission of those actions (though I strongly doubt that this is why cartels and similar groups kill the way they do—the just-so story is just a funny joke to me).

In sum, my just-so story for why Mexican drug cartels and similar groups kill the way they do is, as Smith (2016: 279) notes, “always consistent with the [observation] because [it is] selected to be so.” The reasons why the Aztecs, Maya, and other Mesoamerican groups participated in these ritualistic sacrifices were numerous: from appeasing gods and securing agricultural fertility to cannibalism and related practices. There were various ecological reasons why the Aztecs may have committed human sacrifice, and it was—of course—linked back to the gods they were trying to appease.

The ballgame they played attests to how their societies were organized and how they functioned in the context of their beliefs about appeasing their numerous gods. When the Spanish landed in Mesoamerica and made first contact with the Maya, it took them nearly two centuries to defeat them—though the Maya population was already withering away due to climate change and other related factors (I will cover this in a future article). Although the Spanish destroyed many—if not most—Maya codices, we can still glean important information about their lifestyle and how and why they played their ballgame, which ended in the ritualistic sacrifice of the losing team.

How Things Change: Perspectives on Intelligence in Antiquity

1300 words

The cold winter theory (CWT) purports to explain why those whose ancestors evolved in colder climes are more “intelligent” than those whose ancestors evolved in warmer climes. Popularized by Rushton (1997), Lynn (2006), and Kanazawa (2012), the theory supposedly accounts for the “haves” and the “have-nots” in regard to intelligence. However, the theory is a just-so story—that is, it explains what it purports to explain without generating previously unknown facts not used in its construction. PumpkinPerson is irritated by people who do not believe the just-so story of the CWT, writing (citing the same old “challenges” as Lynn, which were dispatched by McGreal):

The cold winter theory is extremely important to HBD. In fact I don’t even understand how one can believe in racially genetic differences in IQ without also believing that cold winters select for higher intelligence because of the survival challenges of keeping warm, building shelter, and hunting large game.

The CWT is “extremely important to HBD“, as PP claims, since there needs to be an evolutionary basis for population differences in “intelligence” (IQ). Without the just-so story, the claim that racial differences in “intelligence” are “genetically” based crumbles.

Well, here is the biggest “challenge” (all other refutations aside) to the CWT: notions of which populations are or are not “intelligent” change with the times. The best example is what the Greeks—specifically Aristotle—wrote about the intelligence of those who lived in the north. Maurizio Meloni, in his 2019 book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics, captures this point (pg 41-42; emphasis his):

Aristotle’s Politics is a compendium of all these ideas [Orientals being seen as “softer, more delicate and unwarlike” along with the structure of militaries], with people living in temperate (mediocriter) places presented as the most capable of producing the best political systems:

“The nations inhabiting the cold places and those of Europe are full of spirit but somewhat deficient in intelligence and skill, so that they continue comparatively free, but lacking in political organization and the capacity to rule their neighbors. The peoples of Asia on the other hand are intelligent and skillful in temperament, but lack spirit, so that they are in continuous subjection and slavery. But the Greek race participates in both characters, just as it occupies the middle position geographically, for it is both spirited and intelligent; hence it continues to be free and to have very good political institutions, and to be capable of ruling all mankind if it attains constitutional unity.” (Pol. 1327b23-33, my italics)

Views of direct environmental influence and the porosity of bodies to these effects also entered the military machines of ancient empires, like that of the Romans. Officers such as Vegetius (De re militari, I/2) suggested avoiding recruiting troops from cold climates as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016). Delicate and effeminizing land was also to be abandoned as soon as possible, according to Manilius and Caesar (ibid). Probably the most famous geopolitical dictum of antiquity reflects exactly this plastic power of places: “soft lands breed soft men”, according to the claim that Herodotus attributed to Cyrus.

Isn’t it strange how things change? Quite obviously, which populations are or are not “intelligent” depends on the time and place of the observation. Those in northern Europe—purported today to be more intelligent than those from temperate and hotter climes—were seen in antiquity as less intelligent than those from temperate and hotter climes. Imagine stating what Aristotle said thousands of years ago in the present day—those who push the CWT just-so story would look at you like you’re crazy because, supposedly, those who lived in and evolved in colder climes had to plan ahead and faced a tougher environment than those who lived closer to the equator.

Imagine we could transport Aristotle to the present day. What would he say about our perspectives on which populations are or are not intelligent? Surely he would find it ridiculous that the Greeks today are said to be less “intelligent” than northern Europeans. But that only speaks to how things change—how people’s perspectives shift with the times and with who is or is not a dominant group. Now imagine we could transport someone (preferably an “IQ” researcher) to antiquity, when the Greeks were at the height of their power. They would then create a just-so story to justify their observations about the intelligence of populations based on their evolutionary history.

Anatoly Karlin cites Galton, who claims that ancient Greek IQ was 125, while Karlin himself claims IQ 90. I cite Karlin’s article not to contest his “IQ estimates”—nor Galton’s—I cite it to show the disparate “estimates” of the intelligence of the ancient Greeks. Because, according to the Greeks, they occupied the middle position geographically, and so they were both spirited and intelligent compared to Asians and northern Europeans.

This is similar to Wicherts, Borsboom, and Dolan (2010), who responded to Rushton, Lynn, and Templer. They state that the socio-cultural achievements of Mesopotamia and Egypt stand in “stark contrast to the current low level of national IQ of peoples of Iraq and Egypt” and that these ancient achievements appear to contradict evolutionary accounts of differences in national IQ. One can make a similar observation about the Maya: their cultural achievements stand in stark contrast to their “evolutionary history” in warm climes. The Maya were geographically isolated from other populations and still created a writing system (independently), along with other cultural achievements showing that “national IQs” are irrelevant to what a population achieves. I’m sure an IQ-ist can create a just-so story to explain this away, but that’s not the point.

Going back to what Karlin and Galton stated about Greek IQ: their IQ is irrelevant to their achievements. Whether their IQ was 120-125 or 90 is irrelevant to what they achieved. The Mesopotamians and Egyptians, likewise, would have seen themselves as more intelligent than those from northern climes—obviously, based on their own achievements and the lack of achievements in the north. The achievements of peoples in antiquity would paint a wholly different picture in regard to an evolutionary theory of human intelligence—and its distribution in human populations.

So which just-so story (ad hoc hypothesis) should we accept? Or should we just accept that which populations are or are not “intelligent” and capable of building militaries is contingent on the time and place of the observation? “National IQs” of peoples in antiquity would look very different from what we observe today about the “national IQs” (supposedly ‘intelligence’) of populations around the world. In antiquity, those who lived in temperate and even hotter climes had greater achievements than others. Greeks and Romans argued that peoples from northern climes should not be enlisted in the military because of where they were from.

These observations from the Greeks and Romans about whom to enlist in the military, along with their thoughts on northern Europeans, show that perspectives on which populations are or are not “intelligent” are contingent on time and place. This is why “national IQs” should not be accepted, even setting aside the problems with the data (Richardson, 2004; see also Morse, 2008, and The Ethics of Development: An Introduction by Ingram and Derdak, 2018). Seeing the development of countries/populations in antiquity would lead to a wholly different evolutionary theory of the intelligence of populations, proving the contingency of the observations.

Race/Ethnic Differences in Dentition

1300 words

Different groups of people eat different things. Different groups of people also differ genetically. What one eats is part of their environment. So, there is a G and E (genes and environment) interaction between races/ethnies in regard to the shape of their teeth. Indeed, one can have differently shaped teeth, on average, compared to their co-ethnics if they eat different things, since diet is one of the things that shapes the development of teeth.

It is very difficult to ascertain the race of an individual through their dentition, but there are certain dental characters which can lead to the identification of race. Rawlani et al (2017) show that there are differences in the dentition of Caucasians, Negroids, Mongoloids and Australoids.

One distinct difference is that Mongoloid teeth have a “shovel” or “scoop” appearance. They also have larger incisors than Caucasoids, while having shorter anatomic roots with better-developed trunks. Caucasoids had a “v” shape to their teeth, while their anterior teeth were “chisel shaped”; 37 percent of Caucasoids had a Carabelli cusp. Rawlani et al (2017) also note that one study found that 94 percent of Anglo-Saxons had four cusps compared to five for other races. Australoids had a larger arch size (but relatively smaller anterior teeth), which accommodates larger teeth. They have the biggest molars of any race; the mesiodistal diameter of the first molar is 10 percent larger than that of white Americans and Norwegian Lapps. Negroids had smaller teeth with more spacing; they are also less likely to have the Carabelli cusp and shovel incisors. They are more likely to have class III malocclusion (imperfect positioning of the teeth when the jaw is closed) and open bite. Blacks are more likely to have bimaxillary protrusion, though Asians also get orthodontic surgery for it (Yong-Ming et al, 2009).

Rawlani et al’s (2017) review shows that there are morphologic differences in teeth between racial groups that can be used for identification.

When it comes to the emergence of teeth, American Indians (specifically Northern Plains Indians) had an earlier emergence of teeth compared to whites and blacks. American Indian children had a higher rate of dental caries, and so, since their teeth appear at an earlier age compared to whites and blacks, they had more of a chance for their teeth to be exposed to diets high in sugar and processed foods along with lack of oral hygiene (Warren et al, 2016).

Older blacks had more decayed teeth than whites in one study (Hybels et al, 2016). Furthermore, older blacks were more likely than older whites to self-report worse oral hygiene; blacks had a lower number of teeth than whites in this study—which was replicated in other studies—though differences in number of teeth may come down to differences in access to dental care along with dental visits (Huang and Park, 2016). One study even showed that there was unconscious racial bias in regard to root canal treatments: whites were more likely to get root canals (i.e., they showed a bias in decision-making favoring whites), whereas blacks were more likely to get the tooth pulled (Patel et al, 2018).

Kressin et al (2003) also show that blacks are less likely to receive root canals than whites, while Asians were more likely, which lends further credence to the claim of unconscious racial bias. So just as unconscious bias affects patients in regard to other kinds of medical treatment, the same is true for dentists: they have a racial bias which then affects the care they give their patients. Gilbert, Shewchuk, and Litaker (2006) also show that blacks are more likely to have tooth extractions when compared to other races, but people who went to a practice that had a higher percentage of black Americans were more likely to have a tooth extraction, regardless of the individual’s race. This says to me that, since there is unconscious bias in the choice between tooth extraction and root canal, the more black patients a dentist sees, the more likely they would be to extract a patient’s tooth (regardless of that patient’s race), since extraction would become their default due to the number of black American patients they see.

Otuyemi and Noar (1996) showed that Nigerian children had larger mesio-distal crown diameters compared to British children. American blacks are more likely to have hyperdontia (extra teeth in the mouth) compared to whites, and are also more likely to have fourth molars and extra premolars (Harris and Clark, 2008). Blacks have slightly larger teeth than whites (Parciak, 2015).

Dung et al (2019) also note ethnic differences in teeth looking at four ethnic groups in Vietnam:

Our study of 4565 Vietnamese children of four ethnic groups (Kinh, Tay, Thai and Muong) showed that most dental arch indicators in males were statistically significantly higher than those in females.

[…]

In comparison to other ethnic groups, 12-year-old Vietnamese children had similar dimensions of the upper and lower intercanine and intermolar width to children in the same age group in South China. However, the average upper posterior length 1 and lower posterior length 1 were shorter than those in Africans (Kenyan) and Caucasian (American blacks aged 12). The 12-year-old Vietnamese have a narrower and shorter dental arch than Caucasian children, especially the maxillary, and they need earlier orthodontic intervention.

The size of the mandible reflects the type of energy ingested: decreases “in masticatory stress among agriculturalists causes the mandible to grow and develop differently” (Cramon-Taubadel, 2011). This effect would not only be seen in an evolutionary context. Cramon-Taubadel (2011) writes:

The results demonstrate that global patterns of human mandibular shape reflect differences in subsistence economy rather than neutral population history. This suggests that as human populations transitioned from a hunter-gatherer lifestyle to an agricultural one, mandibular shape changed accordingly, effectively erasing the signal of genetic relationships among populations.

So it seems like the change from a hunter-gatherer lifestyle to one based on plant/animal domestication had a significant effect on the mandible—and therefore teeth—of a population.

So teeth are bones, and bones adapt. When an individual is young, the shape of their teeth, and subsequently their jaw, can be altered by diet. Eating hard or soft foods during adolescence can radically change the shape of the teeth (Lieberman, 2013). The harder the food one has to chew, the more one’s facial morphology (i.e., the jaw and cheekbones) and, in turn, teeth will be altered. This is because the teeth are bones and any stress put on them will change them. This, of course, speaks to the interaction of G and E (genes and environment). There are genes that contribute to differences in dental morphology between populations, and they impact the difference between ethnic/racial groups.

Further shaping the differences between these groups is what they choose to eat: the hardness or softness of the food one eats in childhood and adolescence can and will dictate the strength of one’s jaw and the shape and strength of one’s teeth in adulthood, though racial/ethnic identification would still be possible.

Racial differences in dentition come down to evolution (development) and what and how much of the population in question eats. The differences in dentition between these populations are, in a way, dictated by what they eat in the beginning years of life. This critical period may dictate whether or not one has a strong or weak jaw. These differences come down to, like everything else, an interaction between G and E (genes and environment), such as the food one eats as an adolescent/baby which would then affect the formation of teeth in that individual. Of course, in countries that have a super-majority of one ethnic group over another, we can see what diet does to an individual in an ethnic group’s teeth.

There are quite striking differences in dentition between races/ethnic groups, and this can and will (along with other variables) lead to correctly identifying the race of an individual in question.

Black-White Differences in Bone Density

1100 words

I have written a few response articles to some of what Thompson has written over the past few years. He is most ridiculous when he begins to talk about nutrition (see my response to one of his articles on diet: Is Diet An IQ Test?). Now, in a review of Angela Saini’s (2019) new book Superior: The Return of Race Science, titled Superior Ideology, Thompson, yet again, makes more ridiculous assertions—this time about bone density as an adaptation. (I don’t care about what he says about race, though I should note that the debate will be settled with philosophy, not biology. Nor do I care about whatever else he says; I’m only concerned with his awful take on anatomy and physiology.)

The intellectually curious would ask: are there other adaptations which are not superficial? How about bone density?

https://academic.oup.com/jcem/article/82/2/429/2823249
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1863580/

Just-so story incoming.

I’m very familiar with these two papers. Let’s look at them both in turn.

The first study is Racial Differences in Bone Density between Young Adult Black and White Subjects Persist after Adjustment for Anthropometric, Lifestyle, and Biochemical Differences (Ettinger et al, 1997). Now, I did reference this article in my own piece on racial differences in drowning, though only to drive home the point that there are racial differences in bone density. Thompson is outright using this article as “evidence” that it is an adaptation.

In any case, Ettinger et al (1997) state that greater bone density in blacks may be due to differences in calciotropic hormones—hormones that play a major role in bone growth and bone remodeling. When compared with whites “black persons have lower urinary calcium excretion, higher 1,25-dihydroxyvitamin D (1, 25D) level, and lower 25-hydroxyvitamin D (25D) and osteocalcin level (9)” (Ettinger et al, 1997). They also state that bone density can be affected by calcium intake and physical activity, and that testosterone (an androgen) may account for racial and gender differences in bone density, writing “Two studies have demonstrated statistically significantly higher serum testosterone level in young adult black men (22) and women (23).”

Oh, wow. What are refs [22] and [23]? [22] is one of my favorites—Ross et al (1986) (read my response). To be brief, the main problems with Ross et al are that the assay times were all over the place and that it was a small convenience sample of 50 blacks and 50 whites. LabTests Online writes that it is preferred to assay in the morning while in a fasted state. In Ross et al, the assay times were between 10 am and 3 pm, which was a “convenient time” for the students. Given the small sample and inconsistent assay times, this study should not be taken seriously regarding racial differences in testosterone, and, thus, racial differences in bone density.

Now what about [23]? This is another favorite of mine—Henderson et al (1988), of which Ross was a part. Mazur (2016) shows that black women do not have higher levels of testosterone than white women. Furthermore, this is just like Ellis’ (2017) claim that there is a racial difference in prenatal androgen exposure, but that claim, too, fails. In any case, testosterone can’t explain differences in bone density between races.

Ettinger et al (1997) showed that blacks had higher levels of bone density than whites in all of the sites they looked at. (Though they also used skin-fold testing, which is notoriously bad at measuring body composition in blacks; see Vickery, Cureton, and Collins, 1988.) However, Ettinger et al (1997) did not claim, nor did they imply, that bone density is an adaptation.

Now, getting to the second citation, Hochberg (2007). Hochberg (2007) is a review of differences in bone mineral density (BMD) between blacks and whites. Unfortunately, there is no evidence in this paper, either, that BMD is an adaptation. Hochberg (2007) gives numerous reasons why blacks would have stronger skeletons than whites, and none of them is that the trait is an “adaptation”:

Higher bone strength in blacks could be due to several factors including development of a stronger skeleton during childhood and adolescence, slower loss of bone during adulthood due to reduced rates of bone turnover and greater ability to replace lost bone due to better bone formation. Bell and colleagues reported that black children had higher bone mass than white children and that this difference persisted into young adulthood, at least in men. Development of a stronger skeleton during childhood and adolescence is dependent on the interaction of genetic and environmental factors, including nutrition and lifestyle factors.

[…]

Genetic, nutritional, lifestyle and hormonal factors may contribute to differences in rates of bone turnover during adulthood

There are numerous papers in the literature that show that blacks have higher BMD than whites and that there are racial differences in this variable. However, the papers that Thompson has cited are not evidence. That trait T exists and there is a difference in trait T between G1 and G2 does not license the claim that the difference in trait T between G1 and G2 is “genetic.”

Thompson then writes:

Equally, how about differences in glomerular function, a measure of kidney health, for which the scores are adjusted for those of Black African descent, to account for their higher muscle mass? Muscle mass and bone density are not superficial characteristics. In conflicts it would be a considerable advantage to have strong warriors, favouring “hard survival”.

Here’s the just-so story.

Race adjustment for estimating glomerular filtration rate (GFR) is not always needed (Zanocco et al, 2012). Renal function (an indication of how well the kidneys are working) is measured by GFR. Racial differences in kidney function exist, even in patients who do not have CKD (chronic kidney disease) (Peralta et al, 2011). Black Americans also constitute 35 percent of all patients in America receiving kidney dialysis, despite being only 13 percent of the US population. Blacks do generate higher levels of creatinine compared to whites, and this is due to higher average muscle mass when compared with whites.

There are differences in BMD and muscle mass between blacks and whites which are established by young adulthood (Popp et al, 2017), but the claim that trait T is an adaptation because trait T exists and there is a difference between G1 and G2 is unfounded. It’s simply a just-so story, using the old EP reverse engineering. The two papers referenced by Thompson are not evidence that BMD is an adaptation; they only show that there are racial differences in the trait. That there are racial differences in the two traits does not license the claim that the traits in question are adaptations, as Thompson seems to be claiming. The papers he refers to only note a difference between the two groups; they do not discuss the ultimate etiology of the difference between the groups, which Thompson does with his just-so story.

Book Review: “Lamarck’s Revenge”

3500 words

I recently bought Lamarck’s Revenge by paleobiologist Peter Ward (2018) because I went on a trip and needed something to read on the flight. I just finished the book the other day and I thought that I would give a review and also discuss Coyne’s review of the book, since I know he is so uptight about epigenetic theories, like those of Denis Noble and Jablonka and Lamb. In Lamarck’s Revenge, Ward (2018) purports to show that Lamarck was right all along and that the advent of the burgeoning field of epigenetics is “Lamarck’s revenge” against those who—in the current day—make fun of his theories in intro biology classes. (When I took Bio 101, the professor made it a point to bring up Lamarck and giraffe necks as a “Look at this wrong theory”, never mind the fact that Darwin was wrong too.) I will go chapter-by-chapter, give a brief synopsis of each, and then discuss Coyne’s review.

In the introduction, Ward discusses some of the problems with Darwinian thought and current biological understanding. The current neo-Darwinian Modern Synthesis states that what occurs in the lifetime of the organism cannot be passed down to further generations—that any ‘marks’ on the genome are then erased. However, recent research has shown that this is not the case. Numerous studies on plants and “simpler” organisms refute the notion, though for more “complex” organisms it has yet to be proved. However, that this discussion is even occurring is proof that we are heading in the right direction in regard to a new synthesis. In fact, Jablonka and Lamb (2005) showed in their book Evolution in Four Dimensions, that epigenetic mechanisms can and do produce rapid speciation—too quick for “normal” Darwinian evolution.

Ward (2018: 3-4) writes:

There are good times and bad times on earth, and it is proposed here that dichotomy has fueled a coupling of times when evolution has been mainly through Darwinian evolution and others when Lamarckian evolution has been dominant. Darwinian in good times, Lamarckian in bad, when bad can be defined as those times when our environments turn topsy-turvy, and do so quickly. When an asteroid hits the planet. When giant volcanic episodes create stagnant oceans. When a parent becomes a sexual predator. When our industrial output warms the world. When there are six billion humans and counting.

These examples are good—save the one about a parent becoming a sexual predator (though if we accept the thesis that what we do and what happens to us can leave marks on our DNA that don’t change it but are passed on, then it is OK)—and they all point to one thing: when the environment becomes ultra-chaotic. When such changes occur in the environment, that organism needs a physiology that is able to change on-demand to survive (see Richardson, 2017).

Ward (2018: 8) then describes Lamarck’s three-step process:

First, an animal experienced a radical change of the environment around it. Second, the initial response to the environmental change was some new kind of behavior by that animal (or whole species). Third, the behavioral change was followed by morphological change in subsequent generations.

Ward then discusses others before Darwin—Darwin’s grandfather Erasmus, for instance—who had theories of evolution before Darwin. In any case, we went from a world in which a God created all to a world where everything we see was created by natural processes.

Then in Chapter 2, Ward discusses Lamarck and Darwin and each of their theories in turn. (Note that Darwin had Lamarckian views too.) Ward discusses the intellectual duel between Lamarck and Georges Cuvier, the father of the field of comparative anatomy, who studied mass extinctions. At Lamarck’s funeral, Cuvier spoke ill of Lamarck and buried his theories. (See Cuvier’s (1836) Elegy of Lamarck.) These types of arguments between academics have been going on for hundreds of years—and they will not stop any time soon.

In Chapter 3 Ward discusses Darwin’s ideas all the way to the Modern Synthesis, discussing how Darwin formulated his theory of natural selection, the purported “mechanism of evolution.” Ward discusses how Darwin at first rejected Lamarck’s ideas but then integrated them into future editions of On the Origin. We can think of this scenario: Imagine any environment and organisms in it. The environment rapidly shifts to where it is unrecognizable. The organisms in that environment then need to either change their behavior (and reproduce) or die. Now, if there were no way for organisms to change, say, their physiology (since physiology is dependent on what is occurring in the outside environment), then the species would die and there would be no evolution. However, the advent of evolved physiologies changed that. Morphologic and physiologic plasticity can and does help organisms survive in new environments—environments that are “new” to the parental organism—and this is a form of Lamarckism (“heritable epigenetics” as Ward calls it).

Chapter 4 discusses epigenetics and a newer synthesis. In the beginning of the chapter, Ward discusses a study he was a part of (Vandepas, et al, 2016). (Read Ward’s Nautilus article here.)

They studied two (so-called) different species of nautilus—one, Nautilus pompilius, widespread across the Pacific and Indian Oceans, and two, Nautilus stenomphalus, which is only found at the Great Barrier Reef. Pompilius has a hole in the middle of its shell, whereas stenomphalus has a plug in the middle. The two (so-called) species also differ anatomically—pompilius has a hood covered with bumps of flesh whereas stenomphalus’ hood is filled with projections of moss-like twig structures. So over a thirty-day period, they captured thirty nautiluses, snipped a piece of their tentacles, and sequenced the DNA found in it. They found that the DNA of these two morphologically different animals was the same. Thus, although the two are said to be different species based on their morphology, genetically they are the same species, which leads Ward (2018: 52) to claim “that perhaps there are fewer, not more, species on Earth than science has defined.” Ward (2018: 53) cites a recent example—the fact that the Columbian and North American woolly mammoths “were genetically the same but the two had phenotypes determined by environment” (see Enk et al, 2011).

Now take Ward’s (2018: 58) definition of “heritable epigenetics”:

In heritable epigenetics, we pass on the same genome, but one marked (mark is the formal term for the place that a methyl molecule attaches to one nucleotide, a rung in the ladder of DNA) in such a way that the new organism soon has its own DNA swarmed by these new (and usually unwelcome) additions riding on the chromosomes. The genotype is not changed, but the genes carrying the new, sucker-like methyl molecules change the workings of the organism to something new, such as the production (or lack thereof) of chemicals necessary for our good health, or for how some part of the body is produced.

Chapter 5 discusses different environments in the context of evolutionary history. Environmental catastrophes that lead to the decimation of most life on the planet are the subject—something that Gould wrote about in his career (his concept of contingency in the evolutionary process). Now, going back to Lamarck’s dictum (first an environmental change, second a change in behavior, and third a change in phenotype), we can see that these kinds of processes were indeed imperative in the evolution of life on earth. Take the asteroid impact (K-Pg extinction; Cretaceous-Paleogene) that killed off the dinosaurs and threw tons of soot into the air, blocking out the sun making it effectively night (Schulte et al, 2010). All organisms that survived needed to eat. If the organism only ate in the day time, it would then need to eat at night or die. That right there is a radical environmental change (step 1) and then a change in behavior (step 2) which would eventually lead to step 3.

In Chapter 6, Ward discusses epigenetics and the origins of life. The main subject of the chapter is lateral gene transfer—the transmission of DNA between genomes. Hundreds or thousands of new genes can be inserted into an organism and effectively change its morphology; it is a Lamarckian mechanism. Ward posits that there were many kinds of “genetic codes” and “metabolisms” throughout earth’s history, even organisms that were “alive” but were not capable of reproducing and so were “one-offs.” Ward even describes Margulis’ (1967) theory of endosymbiosis as “a Lamarckian event”, which even Margulis accepted. Thus, the evolution of organisms is possible through lateral gene transfer, which is another Lamarckian mechanism.

Chapter 7 discusses epigenetics and the Cambrian explosion. Ward cites a Creationist who claims that there has not been enough time since the explosion some 500 million years ago to explain the diversity of body plans that has arisen since. Stephen Jay Gould wrote a whole book on this—Wonderful Life. It is true that Darwinian theory cannot explain the diversity of body plans, nor even the diversity of species and their traits (Fodor and Piattelli-Palmarini, 2010), but this does not mean that Creationism is true. If we are discussing the diversification of organismal life after mass extinctions, then Darwinian evolution cannot have possibly played a role in the survival of species—organisms with adaptive physiologies would have had a better chance of surviving in these new, chaotic environments.

It is posited here that four different epigenetic mechanisms presumably contributed to the great increase in both the kinds of species and the kinds of morphologies that distinguished them that together produced the Cambrian explosion as we currently know it: the first, now familiar, methylation; second, small RNA silencing; third, changes in the histones, the scaffolding that dictates the overall shape of a DNA molecule; and, finally, lateral gene transfer, which has recently been shown to work in animals, not just microbes. (Ward, 2018: 113)

Ginsburg and Jablonka (2010) state that “[associative] learning-based diversification was accompanied by neurohormonal stress, which led to an ongoing destabilization and re-patterning of the epigenome, which, in turn, enabled further morphological, physiological, and behavioral diversification.” So associative learning, according to Ginsburg and Jablonka, was the driver of the Cambrian explosion. Ward (2018: 115) writes:

[The paper by Ginsburg and Jablonka] says that changes of behavior by both animal predators and animal prey began as an “arms race” in not just morphology but behavior. Learning how to hunt or flee; detecting food and mats and habitats at a distance from chemical senses of smell or vision, or from deciphering vibrations coming through water. Yet none of that would matter if the new behaviors and abilities were not passed on. As more animal body plans and the species they were composed of appeared, ecological communities changed radically and quickly. The epigenetic systems in animals were, according to the authors, “destabilized,” and in reordering them it allowed new kinds of morphology, physiology, and again behavior, and amid this was the ever-greater use of powerful hormone systems. Seeing an approaching predator was not enough. The recognition of imminent danger would only save an animal’s life if its whole body was alerted and put on a “war footing” by the flooding of the creature with stress hormones. Powerful enactors of action. Over time, these systems were made heritable and, according to the authors, the novel evolution of fight or flight chemicals would have greatly enhanced survivability and success of early animals “enabled animals to exploit new niches, promoted new types of relations and arms races, and led to adaptive responses that became fixed through genetics.”

That, and vision. Brains, behavior, sense organs and hormones are tied to the nervous system and the digestive system. No single adaptation led to animal success. It was the integration of these disparate systems into a whole that fostered survivability, and fostered the rapid evolution of new kinds of animals during the evolutionarily fecund Cambrian explosion.

So, ever-changing environments are how physiological systems evolved (see Richardson, 2017: Chapters 4 and 5). Therefore, if the environment were static, then physiologies would not have evolved. Ever-changing environments were imperative to the evolution of life on earth. For if this were not the case, organisms with complex physiologies (note that a physiological system is literally a whole complex of cells) would never have evolved and we would not be here.

In Chapter 8, Ward discusses epigenetic processes before and after mass extinctions. He states that, according to mass extinction researchers, there are three ways in which mass extinctions have occurred: (1) asteroid or comet impact; (2) greenhouse mass extinction events; and (3) glaciation extinction events. These mass extinctions caused the emergence of new body plans and new species—brought on by epigenetic mechanisms.

Chapter 9 discusses good and bad times in human history—and the epigenetic changes that may have occurred. Ward (2018: 149) discusses the Toba eruption and that “some small group of survivors underwent a behavioral change that became heritable, producing cultural change that is difficult to overstate.” Environmental change leads to behavioral change which eventually leads to change in morphology, as Lamarck said, and mass extinction events are the perfect way to show what Lamarck was saying.

In chapter 10 Ward discusses epigenetics and violence, the star of the chapter being MAOA. Take this example from Ward (2018: 167-168):

Causing violent death or escaping violent death or simply being subjected to intense violence causes significant flooding of the body with a whole pharmacological medicine chest of proteins, and in so doing changes the chemical state of virtually every cell. This produces epigenetic change(s) that can, depending on the individual, create a newly heritable state that is passed on to the offspring. The epigenetic change caused by the fight-or-flight response may cause progeny to be more susceptible to causing violence.

Ward then discusses MAOA (pg 168-170), though read my thoughts on the matter. (He discusses the role of epigenetics in the “turning on” of the gene.) Child abuse has been shown to cause epigenetic changes in the brain (Zannas et al, 2015). (It’s notable that Ward—rightly—dispenses with the nature vs. nurture argument in this chapter.)

In Chapter 11, Ward discusses food and famine changing our DNA. He cites the most popular example, that of the studies done on survivors of the Dutch famine who bore children during or after it. (I have discussed this at length.) In September of 1944, the Dutch ordered a nation-wide railroad strike. The Germans then restricted food and medical access to the country, causing the deaths of some 20,000 people and harming millions more. Those who were in the womb during the famine had higher rates of disorders such as obesity, anorexia, and cardiovascular incidents.

However, one study showed that if one’s father had little access to food during the slow growth period, then cardiovascular disease mortality was low. But diabetes mortality was high when the paternal grandfather was exposed to excess food. Further, when SES factors were controlled for, the difference in lifespan was 32 years, which was dependent on whether or not the grandfather was exposed to an overabundance of food or lack of abundance of food just before puberty.

Nutrition can alter the epigenome (Zhang and Kutateladze, 2018), and since the epigenome is heritable, these changes can be passed on to future generations too.

Ward then discusses the microbiome and epigenetics (read my article for a primer on the microbiome, what it does, and racial differences in it). The microbiome has been called “the second genome” (Grice and Segre, 2012), and so, any changes to the “second genome” can also be passed down to subsequent generations.

In Chapter 12, Ward discusses epigenetics and pandemics. Seeing people die from horrible diseases of course has horrible effects on people. Yes, there were evolutionary implications from these pandemics in that the gene pool was decreased—but what of the effects on the survivors? Methylation impacts behavior and behavior impacts methylation (Lerner and Overton, 2017), and so, differing behaviors after such atrocities can be tagged on the epigenome.

Ward then takes the discussion of pandemics and death and shifts to religion. Imagine seeing your children die; would you not want to believe that there was a better place for them after death to—somewhat—quell your sorrow over their loss? Of course, having an epiphany about something (anything, not just religion) can change how you view life. Ward also discusses a study in which atheists had different brain regions activated even when no stimulation was presented. (I don’t like brain imaging studies; see William Uttal’s books and papers.) Ward also discusses the VMAT2 gene, which “controls” mood through the production of the VMAT protein, elevating neurotransmitters such as dopamine and serotonin (similar to taking numerous illegal drugs).

Then in Chapter 13 he discusses chemicals and toxins and how they relate to epigenetic processes. These kinds of chemicals and toxins are linked with changes in DNA methylation, microRNAs, and histone modifications (Hou et al, 2012). (Also see Tiffon, 2018 for more on chemicals and how they affect the epigenome.)

Finally, in Chapter 14 Ward discusses the future of evolution in a world with CRISPR-Cas9. He discusses many ways in which the technology could be useful to us. He cites one study in which Chinese scientists knocked out the myostatin gene in 65 dog embryos. Twenty-seven of the dogs were born, and only two—a male and a female—had both copies of the myostatin gene disrupted. This is just like when researchers made “double-muscle” cattle. See my article ‘Double-Muscled’ Humans?

He then discusses the possibility of “supersoldiers” and whether we can engineer humans to be emotionless killing machines. Imagine being able to engineer humans that had no sympathy, no empathy, that looked just like you and me. CRISPR is a tool that uses epigenetic processes and, thus, we can say that CRISPR is a man-made Lamarckian mechanism of genetic change (mimicking lateral gene transfer).

Now, let’s quickly discuss Coyne’s review before I give my thoughts on the book. He criticizes Ward’s article linked above (Coyne admits he did not read the book), specifically Ward’s claim that the two nautiluses discussed above are the same species with the same genome, with epigenetic forces driving their differences in morphology (phenotype). Take Coyne’s critique of Vandepas et al (2016)—that they only sequenced two mitochondrial genes. Combosch et al (2017; of which Ward was a coauthor) write (my emphasis):

Moreover, previous molecular phylogenetic studies indicate major problems with the conchiological species boundaries and concluded that Nautilus represents three geographically distinct clades with poorly circumscribed species (Bonacum et al, 2011; Ward et al, 2016). This has been reiterated in a more recent study (Vandepas et al, 2016), which concluded that N. pompilius is a morphologically variable species and most other species may not be valid. However, these studies were predominantly or exclusively based on mitochondrial DNA (mtDNA), an informative but often misleading marker for phylogenetic inference (e.g., Stöger & Schrödl 2013) which cannot reliably confirm and/or resolve the genetic composition of putative hybrid specimens (Wray et al, 1995).

Looks like Coyne did not look hard enough for more studies on the matter. In any case, it’s not just Ward who makes this argument—many other researchers do (see e.g., Tajika et al, 2018). So, if there is no genetic difference between these two (so-called) species, yet they have morphological differences, then the likeliest explanation is that the differences in morphology are environmentally driven.

Lastly, Coyne was critical of Ward’s thoughts on the heritability of histone modification, DNA methylation, etc. It seems that Coyne has not read the work of philosopher Jan Baedke (see his Google Scholar page), specifically his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, along with the work of sociologist Maurizio Meloni (see his Google Scholar page), specifically his book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics. If he had, Coyne would see that his rebuttal to Ward makes no sense, as Baedke discusses epigenetics from an evolutionary perspective and Meloni discusses epigenetics through a social, human perspective and what can—and does—occur in regard to epigenetic processes in humans.

Coyne did discuss Noble’s views on epigenetics and evolution—and Noble responded in one of his talks. However, it seems like Coyne is not aware of the work of Baedke and Meloni—I wonder what he’d say about their work? Anything that attacks the neo-Darwinian Modern Synthesis gets under Coyne’s skin—almost as if it is a religion for him.

Did I like the book? I thought it was good. Out of 5 stars, I give it 3. He got some things wrong. For instance, I asked Shea Robinson, author of Epigenetics and Public Policy: The Tangled Web of Science and Politics, about the beginning of the book, and he directed me to two articles on his website: Lamarck’s Actual Lamarckism (or How Contemporary Epigenetics is not Lamarckian) and The Unfortunate Legacy of Jean-Baptiste Lamarck. The beginning of the book is rocky, the middle is good (discussing the Cambrian explosion), and the end is alright. The strength of the book is how Ward discusses the processes by which epigenetic change occurs and how epigenetic processes can accompany—and help drive—evolutionary change, just as Jablonka and Lamb (1995, 2005) argue, along with Baedke (2018). The book is a great read, if only for the history of epigenetics (which Robinson (2018) covers in more depth, as do Baedke (2018) and Meloni (2019)).

Lamarck’s Revenge is a welcome addition to the slew of books and articles that go against the Modern Synthesis and should be required reading for those interested in the history of biology and evolution.

HBD and Sports: Basketball

1600 words

In the past, I have written on the subject of HBD and sports (it is a main subject of this blog). I have covered baseball, football, running, bodybuilding, and strength over many articles, though I have not yet covered basketball. Black Americans comprise 74.4 percent of NBA players, compared to whites at 19.1 percent (TIDES, 2017). Why do blacks dominate the racial composition of basketball? Height is strongly related to success in basketball, yet whites and blacks are around the same height, with blacks being slightly shorter (69.4 inches, compared to 69.8 inches for whites; CDC, 2012). So, why do blacks dominate basketball?

Basketball success isn’t predicated so much on height; rather, limb length plays more of a factor. Blacks have longer limbs than whites (Wagner and Heyward, 2000; Bejan, Jones, and Charles, 2010). The average adult man has an arm span about 2.1 inches greater than his height (Nwosu and Lee, 2008), and Monson, Brasil, and Hlusko (2018) state that basketball players with a greater wingspan-to-height ratio were more successful. The Bleacher Report reports that:

The average NBA Player’s wingspan differential came out at 4.3 percent, so anything above that is going to be reasonably advantageous.

So, more successful basketball players have a longer arm span relative to their height. Blacks have longer limbs than whites, even though the two groups are, on average, the same height. Thus, one reason blacks are more successful than whites at basketball is their somatotype—their long limbs, specifically.

David Epstein (2014: 129) writes in The Sports Gene:

Based on data from the NBA and NBA predraft combines (using only true, shoes-off measurements of players), the Census Bureau and the Centers for Disease Control’s National Center for Health Statistics, there is such a premium on extra height for the NBA that the probability of an American man between the ages of twenty and forty being a current NBA player rises nearly a full order of magnitude with every two-inch increase in height starting at six feet. For a man between six feet and 6’2”, the chance of his currently being in the NBA is five in a million. At 6’2” to 6’4”, that increases to twenty in a million. For a man between 6’10” and seven feet tall, it rises to thirty-two thousand in a million, or 3.2 percent. An American man who is seven feet tall is such a rarity that the CDC does not even list a height percentile at that stature. But the NBA measurements combined with the curve formed by the CDC’s data suggest that of American men ages twenty to forty who stand seven feet tall, a startling 17 percent of them are in the NBA right now.* Find six honest seven-footers, and one will be in the NBA.

* Many of the men who NBA rosters claim are seven feet tall prove to be an inch or even two inches shorter when measured at the combine with their shoes off. Shaquille O’Neal, however, is a true 7’1” with his shoes off.

And on page 132 he writes:

The average arm-span-to-height ratio of an NBA player is 1.063. (For medical context, a ratio greater than 1.05 is one of the traditional diagnostic criteria for Marfan syndrome, the disorder of the body’s connective tissues that results in elongated limbs.) An average-height NBA player, one who is about 6’7”, has a wingspan of seven feet.

So we can clearly see that NBA players, on average, are freaks of nature when it comes to limb length, with arm proportions highly conducive to success in basketball.

Why are long limbs so conducive to basketball success? I can think of a few reasons.

(1) The taller one is and the longer one’s limbs are, the less likely one’s shot is to be blocked.

(2) The taller one is and the longer one’s limbs are, the easier lay-ups become.

(3) The taller one is and the longer one’s limbs are, the better one can battle for rebounds against a shorter man with shorter limbs.

Epstein (2014: 136) also states that the predraft data shows that the average white NBA player is 6’7.5” with a wingspan of 6’10” while the average black NBA player is 6’5.5” with an average wingspan of 6’11”—meaning that blacks were shorter but “longer.” What this means is that blacks don’t play at “their height”—they play as if they were taller due to their wingspan.
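Epstein’s predraft averages can be converted into the arm-span-to-height ratios he describes with a quick calculation (a minimal sketch; the conversion to inches and the rounding are mine):

```python
# Arm-span-to-height ratios from Epstein's (2014: 136) predraft averages.

def to_inches(feet, inches):
    """Convert a feet-and-inches measurement to total inches."""
    return feet * 12 + inches

# Average white NBA player: 6'7.5" tall, 6'10" wingspan.
white_ratio = to_inches(6, 10) / to_inches(6, 7.5)   # 82.0 / 79.5
# Average black NBA player: 6'5.5" tall, 6'11" wingspan.
black_ratio = to_inches(6, 11) / to_inches(6, 5.5)   # 83.0 / 77.5

print(round(white_ratio, 3))  # 1.031
print(round(black_ratio, 3))  # 1.071
```

Note that the average black player’s ratio (about 1.071) sits above the 1.05 Marfan-criterion threshold Epstein mentions, while the average white player’s (about 1.031) sits below it; this is the sense in which blacks “play taller” than their measured height.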

Such limb length differences are a function of climate. Shorter, stockier bodies (i.e., an endomorphic somatotype) are conducive to life in colder climes, whereas longer, narrower bodies (ecto-meso) are conducive to life in the tropics. Endomorphic somas suit colder climes because there is less surface area to keep warm—shorter, more compact bodies retain more heat, as seen in those whose ancestors evolved in cold climes (Asians, the Inuit). Conversely, ecto-meso somas suit hotter, more tropical climes, since this type of body dissipates heat more efficiently than endo somas (Lieberman, 2015). So, blacks are more likely to have the soma conducive to basketball success due to where their ancestors evolved.

So far, then, we have established that height and limb length are conducive to success in basketball. Although blacks and whites in America are, on average, the same height, they have markedly different average limb lengths, as numerous studies attest. These average differences in limb length are how and why blacks succeed far better than whites in the NBA.

Athleticism is irreducible to biology (Lewis, 2004), as has been argued in the past. However, that does not mean there are no traits conducive to success in basketball and other sports. Height and limb length are related: more often than not, the taller one is, the longer one’s limbs are relative to one’s height. This is what we see in elite NBA players. Height, will, altitude, and myriad other factors combine to create the elite NBA phenotype; height seems to be a necessary—not sufficient—condition for basketball success (since one can succeed at basketball without the freakish height of the average player). Though, as Epstein wrote in his book, both height and limb length are conducive to success in basketball, and it just so happens that blacks have longer limbs than whites, which translates into their domination of the sport.

Contrary to popular belief, though, players coming from broken homes and an impoverished life are not the norm. As Dubrow and Adams (2010) write:

We find that, after accounting for methodological problems common in newspaper data, most NBA players come from relatively advantaged social origins and African Americans from disadvantaged social origins have lower odds of being in the NBA than African American and white players from relatively advantaged origins.

Sports writer Peter Keating writes that:

[Dubrow and Adams] found that among African-Americans, a child from a low-income family has 37 percent lower odds of making the NBA than a child from a middle- or upper-income family. Poor white athletes are 75 percent less likely to become NBA players than middle-class or well-off whites. Further, a black athlete from a family without two parents is 18 percent less likely to play in the NBA than a black athlete raised by two parents, while a white athlete from a non-two-parent family has 33 percent lower odds of making the pros. As Dubrow and Adams put it, “The intersection of race, class and family structure background presents unequal pathways into the league.”

(McSweeney, 2008 also has a nice review of the matter.)

Turner et al (2015) state that black males were more likely to play basketball than white males. Higher-income boys were more likely to play baseball, whereas lower-income boys were more likely to play basketball. Though, it seems that when it comes to elite basketball success, players mostly come from higher-income homes.

Therefore, to succeed in basketball, one needs height and long limbs. Contrary to popular belief, an NBA player is less likely to come from a low-income family—most come from middle-class families. Indeed, those from lower-income families, even if they have the skill, most likely won’t have the money to develop their talent. Though there are some analyses which point to basketball being played by lower-income children—and I have no reason to disagree with them—when it comes to professional play, both blacks and whites are less likely to become NBA players if they grew up in poverty.

The limb length differences between blacks and whites which are conducive to sport success are a function of the climate their ancestors evolved in. Now, although athleticism is irreducible to biology (because biological and cultural factors interact to create the elite athletic phenotype), that does not mean there are no traits conducive to sporting success. Quite the opposite: a taller player will more often than not beat a shorter player, and between two players of the same height but different limb lengths, the one with longer limbs stands the better chance. Blacks and whites have different limb lengths, and this explains how and why blacks are more successful at basketball than whites. Cultural and biological factors combine to determine what one is good at.

Basketball is huge in the black community (due in part to people gravitating toward what they are good at). Since blacks have an advantage right out of the gate, they gravitate more toward the sport; height and limb length, therefore, are a huge reason why blacks dominate it.