
High IQ Societies

1500 words

The most well-known high IQ society (HIS hereafter) is Mensa. But did you know that there are many more—much more exclusive—high IQ societies? In his book The Genius Within: Unlocking Your Brain’s Potential (Adam, 2018), David Adam chronicles his quest to raise his IQ score using nootropics. (Nootropics are supposed brain-enhancers, such as creatine, that supposedly help increase cognitive functioning.) Adam discusses his experience taking the Mensa test (Mensa “is Mexican slang for stupid woman“; Adam, 2018) and talking to others who took it with him on the same day. One high-school student he talked to wanted to put Mensa membership on his CV; another said she had accepted a challenge from a family member: since other members of her family were in Mensa, she wanted to show that she had what it took.

Adam states that they were handed two sheets of paper with 30 questions, to be answered in three or four minutes, with the questions increasing in difficulty. The first paper, he says, had a Raven-like aspect to it—rotating shapes and choosing the correct shape that comes next in the sequence. But, since he ran out of time, he says that he answered “A” to the remaining questions when the instructor wasn’t looking, since he “was going to use cognitive enhancement to cheat later anyway” (Adam, 2018: 23). (I will show the results of Adam’s attempted “cognitive enhancement to cheat” on the Mensa exam at the end of this article.) The shapes questions made up the first paper; the second was verbal. On this part, some words had to be defined while others had to be placed into context, or slotted into the right place in a sentence. Adam (2018: 23) gives an example of some of the verbal questions:

Is ‘separate’ the equivalent of ‘unconnected’ or ‘unrelated’? Or ‘evade’ — is it the same as ‘evert’, ‘elude’ or ‘escape’?

[Compare to other verbal questions on standard IQ tests:

‘What is the boiling point of water?’ ‘Who wrote Hamlet?’ ‘In what continent is Egypt?’ (Richardson, 2002: 289)

and

‘When anyone has offended you and asks you to excuse him—what ought you do?’ ‘What is the difference between esteem and affection?’ [this is from the Binet Scales, but “It is interesting to note that similar items are still found on most modern intelligence tests” (Castles, 2013).]]

It took a few weeks for Adam’s results to be delivered to his home. His wife opened the letter and informed him that he had gotten into Mensa. (He got in despite answering “A” to the remaining questions.) This, though, threw a wrench into his plan: he had intended to use cognitive enhancers (nootropics) to boost his cognition, score higher, and get into Mensa that way. However, there are much more exclusive IQ clubs than Mensa. Adam (2018: 30) writes:

Under half of the Mensa membership, for example, would get into the Top One Percent Society (TOPS). And fewer than one in ten of those TOPS members would make the grade at the One in a Thousand Society. Above that the names get cryptic and the spelling freestyle.

There’s the Epida society, the Milenija, the Sthiq Society, and Ludomind. The Universal Genius Society takes just one person in 2,330, and the Ergo Society just one in 31,500. Members of the Mega Society, naturally, are one in a million. The Giga Society? One in a billion, which means, statistically, just seven people on the planet are qualified to join. Let’s hope they know about it. If you are friends with one of them, do tell them.

At the top of the tree is the self-proclaimed Grail Society, which sets its membership criteria so high — one in 76 billion — that it currently has zero members. It’s run by Paul Cooijmans, a guitarist from the Netherlands. About 2,000 people have tried and failed to join, he says. ‘Be assured that no one has come close.’

Wow, what exclusive clubs! Mensans are also more likely to have “psychological and physiological overexcitabilities” (Karpinski et al, 2018) such as ADHD and autism, as well as physiological diseases. How psycho and socially awkward a few members of Mensa are is evidenced in this tweet thread.

[Screenshot of the Mensa tweet thread]

How spooooky. Surely the high-IQ Mensans have un-thought-of ways of killing that we normies could never fathom. And surely, with their high IQs, they can outsmart anyone who would attempt to catch them for murder.

A woman named Jamie Loftus got into Mensa, and she says that you get a discount on Hertz car rentals, a link to the Geico insurance website, access to the Mensa dating site “Mensa Match” (there is also an “IQ” dating site at https://youandiq.com/), an email address, a cardboard membership card, and access to Mensa events in your area. Oh, and of course, you have to pay to take the test and pay yearly to stay in. (Also read Loftus’ other articles on her Mensa experience: one where she describes the death threats she got, and another in which she describes how Mensans would like her to stop writing bad things about them. Seems like Mensans are in their “feels” about being attacked for their little—useless—club.)

One of the founders of Mensa—Lancelot Ware—stated that he “get[s] disappointed that so many members spend so much time solving puzzles” (quoted in Tammet, 2009: 40). If Mensa were anything but members who “spend so much time solving puzzles”, then I think Ware would have said as much. The other founder of Mensa—Roland Berrill—“had intended Mensa as ‘an aristocracy of the intellect’, and was unhappy that a majority of Mensans came from humble homes” (the Wikipedia article on Mensa International cites Serebriakoff, 1986 as the reference for the quote).

So, when it comes to HISs, what do they bring to the world? Or are they just dues-paid clubs so that the people on top can collect money from people stroking their egos—“Yeah, I scored high on a test and am in a club!”?
The supervisor of the Japanese Intelligence Network (JIN) writes (his emphasis):

Currently, the ESOTERIQ society has seven members and the EVANGELIQ has one member.

I can perfectly guarantee that the all members exactly certainly undoubtedly absolutely officially keep authentic the highest IQ score performances.

Especially, the EVANGELIQ is the most exclusive high IQ society which has at least one member.

Do you think the one member of EVANGELIQ talks to himself a lot? From the results of Karpinski et al (2018), I would hazard a guess that, yes, he does. Here is a list of 84 HISs, and there is an even more exclusive club than the Grail Society: the Terra Society (you need to score 205 on a test where the SD is 15 to join).

So is there a use for high IQ societies? I struggle to think of one. They seem to function as money-sinks—suckering people into paying dues just because they scored high on a test (with no validity). The fact that one of the founders of Mensa was upset that Mensa members spend so much time doing puzzles is very telling. What else do they do with their ‘talent’ other than solve puzzles all day? What have the Mensa group—and the (quite possibly hundreds of) other HISs, 84 of which are linked above—done for the world?

Adam—although he guessed at the end of the first Mensa exam (the Raven-like one)—got into Mensa due to his second Mensa test—the verbal one. Adam eventually retook the Mensa exam after taking his nootropic cocktails and he writes (2018: 207):

The second envelope from Mensa was waiting for me when I returned from work, poking out beneath a gas bill. I opened the gas bill first. Its numbers were higher than I expected. I hoped the same would be true of the letter that announced my new IQ.

It was. My cognitively enhanced score on the language test had crept up to 156, from 154 before. And on the Culture Fair Test [the Raven-like test], the tough one with the symbols, it had soared to 137, from 128. That put me on the ninety-ninth percentile on both.

My IQ as measured by the symbols test — the one I had tried to improve on using the brain stimulation — was now 135, up from 125, and well above the required threshold for Mensa Membership.

Adam used modafinil (a drug used to treat excessive sleepiness due to narcolepsy, obstructive sleep apnea, and shift work sleep disorder) and electrical brain stimulation. So Adam increased his scores, but he—of course—has no idea what caused the increases: the nootropic, the electrical stimulation, practice, already having an idea of what was on the test, etc.

In any case, that’s ancillary to the main discussion point of this article: What have Mensa—and other HISs—done for the world? Out of the hundreds of HISs in the world, have they done anything of note, or are they just clubs of people who score highly on a test and then have to pay money to stay in the club? There is no value to these kinds of ‘societies’; they’re just a circlejerk for good test-takers. Mensans have a higher chance of having mental disorders (Karpinski et al, 2018), and the articles above by Jamie Loftus, where Mensans threaten her life with their “criminal element”, illustrate the point.

So, until I’m shown otherwise, Mensa and other HISs are just a circlejerk where people have to pay to be in the club—and that’s all it is.


The “Interactionism Fallacy”

2350 words

A fallacy is an error in reasoning that makes an argument invalid. The “interactionism fallacy”—a term coined by Gottfredson (2009)—is the supposed fallacy of claiming that, since genes and environment interact, heritability estimates are not useful—especially for humans (they are useful for nonhuman animals, whose environments can be fully controlled; see Schonemann, 1997; Moore and Shenk, 2016). There are many reasons why this ‘fallacy’ is anything but a fallacy; it is a simple truism: genes and environment (along with other developmental products) interact to ‘construct’ the organism (what Oyama, 2000 terms ‘constructive interactionism’—“whereby each combination of genes and environmental influences simultaneously interacts to produce a unique result“). The causal parity thesis (CPT) is the thesis that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables (see Noble, 2012 for a similar approach). Genes are not special developmental resources, nor are they more important than other developmental resources. The thesis, then, is that genes and other developmental resources are developmentally ‘on par’.

Genes need the environment. Without the environment, genes would not be expressed. Behavior geneticists claim to be able to partition genes from environment—nature from nurture—on the basis of heritability estimates, mostly gleaned from twin and adoption studies. However, the method is flawed: since genes interact with the environment and with other genes, how would it be possible to neatly partition the effects of genes from the effects of the environment? Behavior geneticists claim that we can partition these two variables, and they—and others—cite the “interactionism fallacy”: the supposed fallacy of holding that, since genes interact with the environment, heritability estimates are useless. This “fallacy”, though, confuses the issue.

Behavior geneticists claim to show how genes and the environment affect the ontogeny of traits in humans with twin and adoption studies (though these methods are highly flawed). The purpose of this “fallacy” is to disregard what developmental systems theorists claim about the interaction of nature and nurture—genes and environment.

Gottfredson (2009) coins the “interactionism fallacy”, which is “an irrelevant truth [which is] that an organism’s development requires genes and environment to act in concert” and the “two forces are … constantly interacting” whereas “Development is their mutual product.” Gottfredson also states that “heritability … refers to the percentage of variation in … the phenotype, which has been traced to genetic variation within a particular population.” (She also makes the claim that “One’s genome is fixed at birth“; this is false—see epigenetics/methylation studies.) Heritability estimates, according to Phillip Kitcher, are “‘irrelevant’ and the fact that behavior geneticists persist in using them is ‘an unfortunate tic from which they cannot free themselves’ (Kitcher, 2001: 413)” (quoted in Griffiths, 2002).
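
For reference, the quantity Gottfredson is describing is the standard variance ratio of quantitative genetics (textbook notation, not hers): heritability is the share of phenotypic variance statistically assigned to genetic variance, and the decomposition only yields a clean ratio if the interaction and covariance terms can be treated as negligible or independently estimable, which is exactly what the interactionist critique denies:

```latex
V_P = V_G + V_E + V_{G \times E} + 2\,\mathrm{Cov}(G, E), \qquad
h^2_{\text{broad}} = \frac{V_G}{V_P}, \quad
h^2_{\text{narrow}} = \frac{V_A}{V_P} \;\;(V_A = \text{additive genetic variance})
```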

Gottfredson is engaging in developmental denialism. Developmental denialism “occurs when heritability is treated as a causal mechanism governing the developmental reoccurrence of traits across generations in individuals.” Gottfredson, with her “interactionism fallacy”, is denying organismal development by attempting to partition genes from environment. As Rose (2006) notes, “Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” The nature vs nurture argument is over and neither has won—contra Plomin’s take—since the two interact.

Gottfredson seems confused, since this point was debated by Plomin and Oyama back in the 80s (see Plomin’s review of Oyama’s book The Ontogeny of Information; Oyama, 1987, 1988; Plomin, 1988a, b). In any case, it is true that development requires genes and environment to interact. But Gottfredson is talking about the concept of heritability—the attempt to partition genes and environment through twin, adoption and family studies (which have a whole slew of problems). For example, Moore and Shenk (2016: 6) write:

Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much.

Susan Oyama writes in The Ontogeny of Information (2000, pg 67):

Heritability coefficients, in any case, because they refer not only to variation in genotype but to everything that varied (was passed on) with it, only beg the question of what is passed on in evolution. All too often heritability estimates obtained in one setting are used to infer something about an evolutionary process that occurred under conditions, and with respect to a gene pool, about which little is known. Nor do such estimates tell us anything about development.

Characters are produced by the interaction of genetic and nongenetic factors. This biological flaw, as Moore and Shenk note, throws a wrench into the claims of Gottfredson and other behavior geneticists: phenotypes are ALWAYS due to genetic and nongenetic factors interacting. So the two flaws of heritability—the environmental and the biological (Moore and Shenk, 2016)—come together to refute the simplistic claim that genes and environment—nature and nurture—can be separated.

For instance, as Moore (2016) writes, though “twin study methods are among the most powerful tools available to quantitative behavioral geneticists (i.e., the researchers who took up Galton’s goal of disentangling nature and nurture), they are not satisfactory tools for studying phenotype development because they do not actually explore biological processes.” (See also Richardson, 2012.) This is because twin studies ignore biological/developmental processes that lead to phenotypes.
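
To make concrete what twin studies actually compute (and therefore what they do not), here is a minimal sketch of the classical Falconer approach that lies behind many reported heritability figures. The correlations are made-up illustrative numbers; the point is that the estimate is pure variance-partitioning arithmetic on phenotypic correlations, resting on assumptions (equal environments across twin types, no gene-environment interaction or covariance) rather than on any measurement of biological or developmental processes.

```python
# Falconer's method: split phenotypic variance into additive genetic (A),
# shared environment (C), and nonshared environment (E) components using
# monozygotic (MZ) and dizygotic (DZ) twin correlations.
# Assumes MZ twins share ~100% and DZ twins ~50% of segregating genes,
# equal environments across twin types, and no G-E interaction or covariance.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Return A ('heritability'), C, and E estimates from twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance share ("heritability")
    c2 = 2 * r_dz - r_mz     # shared-environment variance share
    e2 = 1 - r_mz            # nonshared environment + measurement error
    return {"A (h^2)": a2, "C": c2, "E": e2}

# Illustrative (made-up) correlations for some trait:
print(falconer_ace(r_mz=0.75, r_dz=0.45))
# -> roughly {'A (h^2)': 0.6, 'C': 0.15, 'E': 0.25} (up to floating-point noise)
```

Note that nothing in this arithmetic touches development: if the equal-environments or no-interaction assumptions fail, variance is simply mis-assigned, which is the thrust of the critiques cited above.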

Gamma and Rosenstock (2017) write that the concept of heritability that behavioral geneticists use “is a generally useless quantity” while “the behavioral genetic dichotomy of genes vs environment is fundamentally misguided.” This brings us back to the CPT: there is causal parity among all of the processes/interactants that form the organism and its traits, and thus the concept of heritability that behavioral geneticists employ is a useless measure. Oyama, Griffiths, and Gray (2001: 3) write:

These often overlooked similarities form part of the evidence for DST’s claim of causal parity between genes and other factors of development. The “parity thesis” (Griffiths and Knight 1998) does not imply that there is no difference between the particulars of the causal roles of genes and factors such as endosymbionts or imprinting events. It does assert that such differences do not justify building theories of development and evolution around a distinction between what genes do and what every other causal factor does.

Behavior geneticists’ endeavor, though, is futile. Aaron Panofsky (2016: 167) writes that “Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Also see Panofsky, 2014; Misbehaving Science: Controversy and the Development of Behavior Genetics.) So the behavioral genetic method of partitioning genes and environment does not—and cannot—show causation for trait ontogeny.

Now, while people like Gottfredson and others may deny it, they are genetic determinists. Genetic determinism, as defined by Griffiths (2002), is “the idea that many significant human characteristics are rendered inevitable by the presence of certain genes.” Using this definition, many behavior geneticists and their sympathizers have argued that certain traits are “inevitable” due to the presence of certain genes. Genetic determinism is literally the idea that genes “determine” characters and traits, though it has been known for decades that this is false.

Now we can take a look at Brian Boutwell’s article Not Everything Is An Interaction. Boutwell writes:

Albert Einstein was a brilliant man. Whether his famous equation of E=mc2 means much to you or not, I think we can all concur on the intellectual prowess—and stunning hair—of Einstein. But where did his brilliance come from? Environment? Perhaps his parents fed him lots of fish (it’s supposed to be brain food, after all). Genetics? Surely Albert hit some sort of genetic lottery—oh that we should all be so lucky. Or does the answer reside in some combination of the two? How very enlightened: both genes and environment interact and intertwine to yield everything from the genius of Einstein to the comedic talent of Lewis Black. Surely, you cannot tease their impact apart; DNA and experience are hopelessly interlocked. Except, they’re not. Believing that they are is wrong; it’s a misleading mental shortcut that has largely sown confusion in the public about human development, and thus it needs to be retired.

[…]

Most traits are the product of genetic and environmental influence, but the fact that both genes and environment matter does not mean that they interact with one another. Don’t be lured by the appeal of “interactions.” Important as they might be from time to time, and from trait to trait, not everything is an interaction. In fact, many things likely are not.

I don’t even know where to begin here. Boutwell, like Gottfredson, is confused. The only thing that needs to be retired because it “has largely sown confusion in the public about human development” is, ironically, the concept of heritability (Moore and Shenk, 2016)! I have no idea why Boutwell claimed that it’s false that “DNA and experience [environment] are hopelessly interlocked.” This is because, as Schneider (2007) notes, “the very concept of a gene requires an environment.” Since the concept of the gene requires the environment, how can we disentangle them into neat percentages like behavior geneticists claim to do? That’s right: we can’t. Do be lured by the appeal of interactions; all biological and nonbiological stuff constantly interacts with one another.

Boutwell’s claims are nonsense. It is worth quoting Richard Lewontin’s foreword to the 2000 second edition of Susan Oyama’s The Ontogeny of Information (emphasis Lewontin’s):

Nor can we partition variation quantitatively, ascribing some fraction of variation to genetic differences and the remainder to environmental variation. Every organism is the unique consequence of the reading of its DNA in some temporal sequence of environments and subject to random cellular events that arise because of the very small number of molecules in each cell. While we may calculate statistically an average difference between carriers of one genotype and another, such average differences are abstract constructs and must not be reified with separable concrete effects of genes in isolation from the environment in which the genes are read. In the first edition of The Ontogeny of Information Oyama characterized her construal of the causal relation between genes and environment as interactionist. That is, each unique combination of genes and environment produces a unique and a priori unpredictable outcome of development. The usual interactionist view is that there are separable genetic and environmental causes, but the effects of these causes acting in combination are unique to the particular combination. But this claim of ontogenetically independent status of the causes as causes, aside from their interaction in the effects produced, contradicts Oyama’s central analysis of the ontogeny of information. There are no “gene actions” outside environments, and no “environmental actions” can occur in the absence of genes. The very status of environment as a contributing cause to the nature of an organism depends on the existence of a developing organism. Without organisms there may be a physical world, but there are no environments. In like the manner no organisms exist in the abstract without environments, although there may be naked DNA molecules lying in the dust. Organisms are the nexus of external circumstances and DNA molecules that make these physical circumstances into causes of development in the first place. They become causes only at their nexus, and they cannot exist as causes except in their simultaneous action. That is the essence of Oyama’s claim that information comes into existence only in the process of ontogeny. (Oyama, 2000: 16)

There is an “interactionist consensus” (see Oyama, Griffiths, and Gray, 2001; What is Developmental Systems Theory? pg 1-13): the organism and the suite of traits it has are due to the interaction of genetic/environmental/epigenetic etc. resources at every stage of development. Therefore, for organismal development to be successful, it always requires the interaction of genes, environment, epigenetic processes, and everything else that is used to ‘construct’ the organism and the traits it has. Thus “it makes no sense to ask if a particular trait is genetic or environmental in origin. Understanding how a trait develops is not a matter of finding out whether a particular gene or a particular environment causes the trait; rather, it is a matter of understanding how the various resources available in the production of the trait interact over time” (Kaplan, 2006).

Lastly, I will comment briefly on Sesardic’s (2005: chapter 2) critiques of developmental systems theorists and their critique of heritability and the concept of interactionism. Sesardic argues in the chapter that interaction between genes and environment, nature and nurture, does not undermine heritability estimates (the nature/nurture partition). Philosopher of science Helen Longino argues in her book Studying Human Behavior (2013):

By framing the debate in terms of nature versus nurture and as though one of these must be correct, Sesardic is committed to both downplaying the possible contributions of environmentally oriented research and to relying on a highly dubious (at any rate, nonmethodological) empirical claim.

In sum, the “interactionist fallacy” (coined by Gottfredson) is not a ‘fallacy’ (an error in reasoning) at all. For, as Oyama writes in Evolution’s Eye: A Systems View of the Biology-Culture Divide, “A not uncommon reaction to DST is, ‘‘That’s completely crazy, and besides, I already knew it”” (pg 195). This is exactly what Gottfredson (2009) states: she “already knew” that there is an interaction between nature and nurture, but she goes on to deny the arguments from Oyama, Griffiths, Stotz, Moore, and others on the uselessness of heritability estimates, along with the claim that nature and nurture cannot be neatly partitioned into percentages as they are constantly interacting. Causal parity between genes and other developmental resources, too, upends the claim that heritability estimates for any trait make sense (not least because of how heritability estimates are gleaned for humans—mostly twin, family, and adoption studies). Developmental denialism—what Gottfredson and others often engage in—runs rampant in the “behavioral genetic” sphere; Oyama, Griffiths, Stotz, and others show why we should not deny development and should discard these estimates for human traits.

Heritability estimates imply that there is a “nature vs nurture” when it is really “nature and nurture”, constantly interacting—and, because of this, we should discard these estimates: it does not make sense to partition an interacting, self-organizing developmental system. Claims from behavior geneticists—that genes and environment can be separated—are clearly false.

Five Years Away Is Always Five Years Away

1300 words

Five years away is always five years away. When one makes such a claim, they can always fall back on the “just wait five more years!” canard. Charles Murray is one who makes such claims. In an interview with Frank Miele, editor of Skeptic magazine, Murray stated:

I have confidence that in five years from now, and thereafter, this book will be seen as a major accomplishment.

This interview was in 1996 (after the release of the softcover edition of The Bell Curve), and so “five years” would be 2001. But “predictions” like this from HBDers (that the next big thing for their ideology is only X years away, for example) happen a lot. I’ve seen many HBDers claim that in just 5 to 10 years the evidence for their position will come out. Such claims seem strangely religious to me. There is a reason for that. (See Conley and Domingue, 2016 for a molecular genetic refutation of The Bell Curve: not only did Murray’s prediction fail, but 22 years after The Bell Curve’s publication the claims of Murray and Herrnstein were refuted.)

Numerous people throughout history have made predictions regarding the date of Christ’s return. Some have used calculations from the Bible to ascertain the date. We can just take a look at the Wikipedia page for predictions and claims for the second coming of Christ, where there are many (obviously failed) predictions of His return.

Take John Wesley’s claim that Revelation 12:14 referred to the day that Christ should come. Or Charles Taze Russell’s (the first president of the Watch Tower Society of Jehovah’s Witnesses) claim that Jesus would return in 1874 and rule invisibly from heaven.

Russell’s beliefs began with Adventist teachings. While Russell, at first, did not take to the claim that Christ’s return could be predicted, that changed when he met Adventist author Nelson Barbour. The Adventists taught that the End Times began in 1799, that Christ returned invisibly in 1874, and that a physical return would follow in 1878. (When this did not come to pass, many followers left Barbour, and Russell stated that Barbour did not get the event wrong, he just got the fate wrong.) So all Christians who died before 1874 would be resurrected, and Armageddon would begin in 1914. Since WWI began in 1914, Russell took that as evidence that his prediction was coming to pass. So Russell sold his clothing stores, worth millions of dollars today, and began writing and preaching about Christ’s imminent return. This doesn’t need to be said, but the predictions obviously failed.

So the date of 1914 for Armageddon (when Christ was supposed to return) was arrived at by Russell from studying the Bible and the great pyramids:

A key component to the calculation was derived from the book of Daniel, Chapter 4. The book refers to “seven times“. He interpreted each “time” as equal to 360 days, giving a total of 2,520 days. He further interpreted this as representing exactly 2,520 years, measured from the starting date of 607 BCE. This resulted in the year 1914-OCT being the target date for the Millennium.
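
Spelled out, the arithmetic described in that passage runs as follows (a reconstruction of the calculation, not Russell’s own notation; note that there is no year zero between 607 BCE and 1914 CE):

```latex
7 \times 360 = 2520 \ \text{``days''} \rightarrow 2520 \ \text{years}; \qquad 2520 - 607 + 1 = 1914
```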

Here is the prediction in Russell’s words: “…we consider it an established truth that the final end of the kingdoms of this world, and the full establishment of the Kingdom of God, will be accomplished by the end of A.D. 1914” (1889). When 1914 came and went (save the beginning of WWI, which he took to be a sign of the End Times), Russell changed his view.

Now, we can liken the Russell situation to Murray. Murray claimed that, five years after his book’s publication, the “book would be seen as a major accomplishment.” Murray also made a similar claim back in 2016. Someone wrote to evolutionary biologist Joseph Graves about a talk Murray gave; Murray was offered an opportunity to debate Graves about his claims. Graves stated (my emphasis):

After his talk I offered him an opportunity to debate me on his claims at/in any venue of his choosing. He refused again, stating he would agree after another five years. The five years are in the hope of the appearance of better genomic studies to buttress his claims. In my talk I pointed out the utter weakness of the current genomic studies of intelligence and any attempt to associate racial differences in measured intelligence to genomic variants.

(Do note that this was back in April of 2016, about one year before I changed my hereditarian views to those of DST. I emailed Murray about this; he responded and gave me permission to post his reply, which you can read at the above link.)

Emil Kirkegaard stated on Twitter:

Do you wanna bet that future genomics studies will vindicate us? Ashkenazim intelligence is higher for mostly genetic reasons. Probably someone will publish mixed-ethnic GWAS for EA/IQ within a few years

Notice, though, that “within a few years” is vague; I would take that to be, as Kirkegaard states next, three years. Kirkegaard was much more specific for PGS (polygenic scores) and Ashkenazi Jews, stating that “causal variant polygenic scores will show alignment with phenotypic gaps for IQ eg in 3 years time.” I’ll remember this: January 6th, 2022. (Though it was just an “example given”, this is a good example of a prediction from an HBDer.) Never mind the problems with PGS/GWA studies (Richardson, 2017; Janssens and Joyner, 2019; Richardson and Jones, 2019).

I can see a prediction being made, it not coming to pass, and, just like Russell, someone stating, “No!! X, Y, and Z happened and that invalidated the prediction! The new one is X time away!” Being vague about timetables for yet-to-occur events is dishonest; stick to the claim, and if it does not occur… stop holding the view, just as Russell did. However, people like Murray won’t change their views; they’re too entrenched in this. Most may know that, over two years ago, I changed my views on hereditarianism (which “is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality“) due to two books: DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes and Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. But I may just be a special case here.

Genes, Brains, and Human Potential then led me to the work of Jablonka and Lamb, Denis Noble, David Moore, Robert Lickliter, and others—the developmental systems theorists. DST is completely at odds with the main “field” of “HBD”: behavioral genetics. See Griffiths and Tabery (2013) for why teasing apart genes and environment—nature and nurture—is problematic.

In any case, five years away is always five years away, especially with HBDers. The magic evidence is always “right around the corner”, despite the fact that none ever comes. I know that some HBDers will probably clamor that I’m wrong and that Murray or another “HBDer” has made a successful prediction without immediately pushing back its date. But, just like Charles Taze Russell, when the prediction does not come to pass, they can just make something up about how and why it didn’t come to pass, and everything will be fine.

I think Charles Murray should change his name to Charles Taze Russell, since he has pushed back the date of his prediction so many times. Though, to Russell’s credit, he did eventually recant his views. I find it hard to believe that Murray would; he’s too deep in this game, and his career writing books and being an AEI pundit is on the line.

So I strongly doubt that Murray would ever come right out and say “I was wrong.” Too much money is on the line for him. (Note that Murray has a new book releasing in January titled Human Diversity: Gender, Race, Class, and Genes, and you know that I will give a scathing review of it, since I already know Murray’s MO.) It’s ironic to me: most HBDers are pretty religious in their convictions and can and will explain away data that doesn’t line up with their beliefs, just like a theist.

Men Are Stronger Than Women

1200 words

The claim that “men are stronger than women” hardly needs to be stated—it is obvious through observation that men are stronger than women. To my (non-)surprise, I saw someone on Twitter state:

“I keep hearing that the sex basis of patriarchy is inevitable because men are (on average) stronger. Notwithstanding that part of this literally results from women in all stages of life being denied access to and discourage from physical activity, there’s other stuff to note.”

To which I replied:

“I don’t follow – are you claiming that if women were encouraged to be physically active that women (the population) can be anywhere *near* men’s (the population) strength level?”

I then got told to “Fuck off,” because I’m a “racist” (due to the handle I use and my views on the reality of race). In any case, it is true that part of this difference stems from cultural factors—think of women wanting the “toned” look, not wanting to get “big and bulky” (as if that happens overnight), and not wanting to lift heavy weights for fear of becoming cartoonish.

Here’s the thing, though: Men have about 61 percent more muscle mass than women (which is attributed to higher levels of testosterone), and most of the muscle mass difference is allocated to the upper body—men have about 75 percent more arm muscle mass than women, which accounts for roughly 90 percent greater upper-body strength in men. Men also have about 50 percent more lower-body muscle mass, which is related to their 65 percent greater lower-body strength (see references in Lassek and Gaulin, 2009: 322).

Men have around 24 pounds more skeletal muscle mass than women; in this study, women were about 40 percent weaker in the upper body and 33 percent weaker in the lower body (Janssen et al, 2000). Miller et al (1993) found that women had a 45 percent smaller cross-sectional area in the biceps brachii, 45 percent in the elbow flexors, 30 percent in the vastus lateralis, and 25 percent in the knee extensors, as I wrote in Muscular Strength by Gender and Race, where I concluded:

The cause for less upper-body strength in women is due to the distribution of women’s lean tissue being smaller.

Men have larger muscle fibers, which in my opinion is a large part of the reason for men’s strength advantage over women. Now, if women were “discouraged” from physical activity, this would be a problem for their bone density: our bones are porous, and by doing a lot of activity we can strengthen them (see e.g., Fausto-Sterling, 2005). Bishop, Cureton, and Collins (1987) show that the sex difference in strength in close-to-equally-trained men and women “is almost entirely accounted for by the difference in muscle size,” which lends credence to the claim I made above.

Lindle et al (1997) conclude that:

… the results of this study indicate that Con strength levels begin to decline in the fourth rather than in the fifth decade, as was previously reported. Contrary to previous reports, there is no preservation of Ecc compared with Con strength in men or women with advancing age. Nevertheless, the decline in Ecc strength with age appears to start later in women than in men and later than Con strength did in both sexes. In a small subgroup of subjects, there appears to be a greater ability to store and utilize elastic energy in older women. This finding needs to be confirmed by using a larger sample size. Muscle quality declines with age in both men and women when Con peak torque is used, but declines only in men when Ecc peak torque is used. [“Con” and “Ecc” strength refer to concentric and eccentric actions]

Women are shorter than men and have less fat-free muscle mass than men. Women also have a weaker grip: even when matched for height and weight, men had higher levels of lean mass than women (92 and 79 percent, respectively; Nieves et al, 2009), and men had greater bone mineral density (BMD) and bone mineral content (BMC) than women. Now do some quick thinking—do you think that someone with weaker bones could be stronger than someone with stronger bones? If person A had higher levels of BMC and BMD than person B, who do you think would be stronger and perform better on whatever strength test—the one with the weaker bones or the one with the stronger bones? Quite obviously, the stronger one’s bones are, the more weight they can bear. So if one has weak bones (low BMC/BMD) and puts a heavy load on their back, their bones could snap under the lift.

Alswat (2017) reviewed the literature on bone density between men and women and found that men had higher BMD in the hip and higher BMC in the lower spine. Women also had bone fractures earlier than men. Some of this is no doubt cultural, as explained above. However, even if we had a boy and a girl locked in a room for their whole lives and they did the same exact things, ate the same food, and lifted the same weights, I would bet my freedom that there still would be a large difference between the two, skewing where we know it would skew. Women are more likely to suffer from osteoporosis than are men (Sözen, Özışık, and Başaran 2016).

So if women have weaker bones than men, then how could they possibly be stronger? Even if men and women had the same kind of physical activity down to a tee, could you imagine women being stronger than men? I couldn’t—but that’s because I have more than a basic understanding of anatomy and physiology and what that means for differences in strength—or running—between men and women.

I don’t doubt that there are cultural reasons that account for part of the large difference in strength between men and women—I do doubt, though, that the gap can be meaningfully closed. Yes, biology interacts with culture. The developmental variables that coalesce to make men “Men” and those that coalesce to make women “Women” converge in creating the stark differences in phenotype between the sexes, which in turn explains how the differences in strength manifest.

Differences in bone strength between men and women, along with distribution of lean tissue, differences in lean mass, and differences in muscle size explain the disparity in muscular strength between men and women. You can even imagine a man and woman of similar height and weight and they would, of course, look different. This is due to differences in hormones—the two main players being testosterone and estrogen (see Lang, 2011).

So yes, part of the difference in strength between men and women is rooted in culture and how we view women who strength train (way more women should strength train, as a matter of fact), though I find it hard to believe that, even if the “cultural stigma” around women who lift heavy weights at the gym disappeared overnight, women would be as strong as men. Differences in strength exist between men and women, and they exist due to the complex relationship between biology and culture—nature and nurture (which cannot be disentangled).

DNA—Blueprint and Fortune Teller?

2500 words

What would you think if you heard about a new fortune-telling device that is touted to predict psychological traits like depression, schizophrenia and school achievement? What’s more, it can tell your fortune from the moment of your birth, it is completely reliable and unbiased — and it only costs £100.

This might sound like yet another pop-psychology claim about gimmicks that will change your life, but this one is in fact based on the best science of our times. The fortune teller is DNA. The ability of DNA to understand who we are, and predict who we will become has emerged in the last three years, thanks to the rise of personal genomics. We will see how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. This is a game-changer as it has far-reaching implications for psychology, for society and for each and every one of us.

This DNA fortune teller is the culmination of a century of genetic research investigating what makes us who we are. When psychology emerged as a science in the early twentieth century, it focused on environmental causes of behavior. Environmentalism — the view that we are what we learn — dominated psychology for decades. From Freud onwards, the family environment, or nurture, was assumed to be the key factor in determining who we are. (Plomin, 2018: 6, my emphasis)

The main premise of Plomin’s 2018 book Blueprint is that DNA is a fortune teller and personal genomics is a fortune-telling device. The fortune-telling device Plomin most discusses in the book is the polygenic score (PGS). PGSs are gleaned from GWA studies: each SNP genotype is coded 0, 1, or 2 (the count of effect alleles), and these are summed—typically weighted by the GWAS effect sizes—to give the individual their PGS for trait T. Plomin’s claim that DNA is a fortune teller, though, falls apart since DNA is not a blueprint—which is where the claim that “DNA is a fortune teller” comes from.
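
As a rough operational illustration of what a polygenic score is (a hypothetical sketch with made-up numbers, not Plomin’s or any consortium’s actual pipeline): each SNP is coded by the count of effect alleles (0, 1, or 2), and the counts are summed, weighted by the effect sizes estimated in a GWAS.

```python
import numpy as np

# Hypothetical illustration of a polygenic score (PGS) calculation.
# genotypes: per-person counts of the effect allele at each SNP (0, 1, or 2).
# gwas_weights: per-SNP effect sizes taken from a GWAS (made-up numbers here).

genotypes = np.array([
    [0, 1, 2, 1, 0],   # person 1
    [2, 2, 1, 0, 1],   # person 2
    [1, 0, 0, 2, 2],   # person 3
])
gwas_weights = np.array([0.02, -0.01, 0.05, 0.03, -0.02])  # illustrative betas

pgs = genotypes @ gwas_weights   # weighted sum of allele counts per person
print(pgs)   # one score per person, approximately [0.12, 0.05, 0.04]
```

Because the weights come from GWAS associations, every bias in the discovery sample (including the population issues discussed below) is carried straight into the score.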

It’s funny that Plomin calls the measure “unbiased” (he is talking about DNA, which is in effect “unbiased”), but PGS are anything BUT unbiased. For example, most GWAS/PGS are derived from European populations, and there are “biases and inaccuracies of polygenic risk scores (PRS) when predicting disease risk in individuals from populations other than those used in their derivation” (De La Vega and Bustamante, 2018). (PRSs are derived from statistical gene associations using GWAS; Janssens and Joyner, 2019.) Europeans make up more than 80 percent of GWAS samples. This is why, due to the large number of GWASs on European populations, “prediction accuracy [is] reduced by approximately 2- to 5-fold in East Asian and African American populations, respectively” (Martin et al, 2018). See for example Figure 1 from Martin et al (2018):

[Figure 1 from Martin et al (2018): polygenic score prediction accuracy across ancestries, relative to Europeans]

With the huge number of GWAS studies done on European populations, these scores cannot be used on non-European populations for ‘prediction’—even disregarding the other problems with PGS/GWAS.

By studying genetically informative cases like twins and adoptees, behavioural geneticists discovered some of the biggest findings in psychology because, for the first time, nature and nurture could be disentangled.

[…]

… DNA differences inherited from our parents at the moment of conception are the consistent, lifelong source of psychological individuality, the blueprint that makes us who we are. A blueprint is a plan. … A blueprint isn’t all that matters but it matters more than everything else put together in terms of the stable psychological traits that make us who we are. (Plomin, 2018: 6-8, my emphasis)

Never mind the slew of problems with twin and adoption studies (Joseph, 2014; Joseph et al, 2015; Richardson, 2017a). I also refuted the notion that “A blueprint is a plan” last year, quoting numerous developmental systems theorists. The main thrust of Plomin’s book—that DNA is a blueprint and can therefore be seen as a fortune teller, with personal genomics as the fortune-telling device for the people whose DNA is analyzed—is false, as DNA does not work the way Plomin imagines.

These big findings were based on twin and adoption studies that indirectly assessed genetic impact. Twenty years ago the DNA revolution began with the sequencing of the human genome, which identified each of the 3 billion steps in the double helix of DNA. We are the same as every other human being for more than 99 percent of these DNA steps, which is the blueprint for human nature. The less than 1 per cent of difference of these DNA steps that differ between us is what makes us who we are as individuals — our mental illnesses, our personalities and our mental abilities. These inherited DNA differences are the blueprint for our individuality …

[DNA predictors] are unique in psychology because they do not change during our lives. This means that they can foretell our futures from our birth.

[…]

The applications and implications of DNA predictors will be controversial. Although we will examine some of these concerns, I am unabashedly a cheerleader for these changes. (Plomin, 2018: 8-10, my emphasis)

This quote further shows the “blueprint” for the rest of Plomin’s book—DNA can “foretell our futures from our birth”—and how it shapes the conclusions he draws from his work. Yes, all scientists are biased (as Stephen Jay Gould noted), but Plomin outright claims to be an unabashed cheerleader for these changes. That self-admission does explain some of the conclusions he reaches in Blueprint.

However, the problem with the mantra ‘nature and nurture’ is that it runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled.

[…]

Our future is DNA. (Plomin, 2018: 11-12)

The problem with the mantra “nature and nurture” is not that it “runs the risk of sliding back into the mistaken view that the effects of genes and environment cannot be disentangled”—though that is one problem. The problem is how Plomin assumes how DNA works. That DNA can be disentangled from the environment presumes that DNA is environment-independent. But as Moore shows in his book The Dependent Gene—and as Schneider (2007) shows—“the very concept of a gene requires the environment“. Moore notes that “The common belief that genes contain context-independent “information”—and so are analogous to “blueprints” or “recipes”—is simply false” (quoted in Schneider, 2007). Moore showed in The Dependent Gene that twin studies are flawed, as have numerous other authors.

Lewkowicz (2012) argues that “genes are embedded within organisms which, in turn, are embedded in external environments. As a result, even though genes are a critical part of developmental systems, they are only one part of such systems where interactions occur at all levels of organization during both ontogeny and phylogeny.” Plomin—although he does not explicitly state it—is a genetic reductionist. This type of thinking can be traced back, most popularly, to Richard Dawkins’ 1976 book The Selfish Gene. The genetic reductionists can, and do, make the claim that organisms can be reduced to their genes, while developmental systems theorists claim that holism, and not reductionism, better explains organismal development.

The main thrust of Plomin’s Blueprint rests on (1) GWA studies and (2) PGSs/PRSs derived from those GWA studies. Ken Richardson (2017b) has argued that there is “some cryptic but functionally irrelevant genetic stratification in human populations, which, quite likely, will covary with social stratification or social class.” Richardson’s (2017b) argument is simple: societies are genetically stratified; social stratification maintains genetic stratification; social stratification creates—and maintains—cognitive differentiation; “cognitive” tests reflect prior social stratification. This “cryptic but functionally irrelevant genetic stratification in human populations” is what GWA studies pick up. Richardson and Jones (2019) extend the argument, arguing that spurious correlations can arise from genetic population structure that GWA studies cannot account for. Even though GWA study authors claim that this population stratification is accounted for, social class is defined solely on the basis of SES (socioeconomic status) and therefore does not capture all of what “social class” itself captures (Richardson, 2002: 298-299).
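
Richardson and Jones’s point can be illustrated with a toy simulation (entirely hypothetical numbers, a sketch of the confound rather than a re-analysis of any real dataset): if two subpopulations differ in the frequency of a biologically irrelevant SNP and also differ, for purely social reasons, in mean test score, a naive association test on the pooled sample will “find” the SNP even though it does nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # people per subpopulation

# A SNP with different allele frequencies in two subpopulations,
# but no causal effect on the trait whatsoever.
geno_a = rng.binomial(2, 0.2, n)   # subpopulation A, effect-allele frequency 0.2
geno_b = rng.binomial(2, 0.6, n)   # subpopulation B, effect-allele frequency 0.6

# The trait ("test score") differs between the groups for non-genetic reasons
# (e.g., social stratification), with the same spread within each group.
trait_a = rng.normal(100, 15, n)
trait_b = rng.normal(92, 15, n)

genotype = np.concatenate([geno_a, geno_b])
trait = np.concatenate([trait_a, trait_b])

# Naive association across the pooled sample: a spurious correlation appears.
pooled_r = np.corrcoef(genotype, trait)[0, 1]
within_a = np.corrcoef(geno_a, trait_a)[0, 1]
within_b = np.corrcoef(geno_b, trait_b)[0, 1]
print(f"pooled genotype-trait correlation: {pooled_r:.3f}")   # clearly nonzero
print(f"within-group correlations: {within_a:.3f}, {within_b:.3f}")   # ~0
```

GWAS pipelines try to adjust for this kind of structure with principal components and related controls; the argument in Richardson and Jones (2019) is that such adjustments cannot be guaranteed to remove subtler, socially maintained stratification.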

Plomin also heavily relies on the results of twin and adoption studies—a lot of it being his own work—to attempt to buttress his arguments. However, as Moore and Shenk (2016) show—and as I have summarized in Behavior Genetics and the Fallacy of Nature vs Nurture—heritability estimates for humans are highly flawed since there cannot be a fully controlled environment. Moore and Shenk (2016: 6) write:

Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much. In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’ 31 Reviewing the evidence, we come to the same conclusion.

Heritability estimates assume that nature (genes) can be separated from nurture (environment), but “the very concept of a gene requires the environment” (Schneider, 2007) so it seems that attempting to partition genetic and environmental causation of any trait T is a fool’s—reductionist—errand. If the concept of gene depends on and requires the environment, then how does it make any sense to attempt to partition one from the other if they need each other?

Let’s face it: Plomin, in his book Blueprint, is speaking like a biological reductionist, though he may deny the claim. The claims from those who push PRS for use in precision medicine are unfounded, as there are numerous problems with the concept. Precision medicine and personalized medicine are similar concepts, though Joyner and Paneth (2015) are skeptical of their use and have seven questions for personalized medicine. Furthermore, Joyner, Boros and Fink (2018) argue that “redundant and degenerate mechanisms operating at the physiological level limit both the general utility of this assumption and the specific utility of the precision medicine narrative.” Joyner (2015: 5) also argues that “Neo-Darwinism has failed clinical medicine. By adopting a broader perspective, systems biology does not have to.“

Janssens and Joyner (2019) write that “Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest.” Researchers can show correlations between disease phenotypes and genes, but they cannot show causation—which would require mechanistic relations between the proposed genes and the disease phenotype. And, as Kampourakis (2017: 19) notes, genes do not cause diseases on their own; they only contribute to their variation.

Edit: Take also this quote from Plomin and Stumm (2018) (quoted by Turkheimer):

GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.

Turkheimer goes on to say that this (false) assumption by Plomin and Stumm (2018) presumes that there is no top-down causation—i.e., that phenotypes don’t cause genes, that there is no causation from the top to the bottom. (See the special issue of Interface Focus for a slew of articles on top-down causation.) Downward (top-down) causation exists in biological systems (Noble, 2012, 2017), alongside bottom-up causation. The very claim that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” is ridiculous! This is something that, of course, Plomin did not discuss in Blueprint. But in a book that supposedly shows “how DNA makes us who we are”, why not discuss epigenetics? Plomin is confused, because DNA methylation impacts behavior and behavior impacts DNA methylation (Lerner and Overton, 2017: 114). Lerner and Overton (2017: 145) write that:

… it should no longer be possible for any scientist to undertake the procedure of splitting of nature and nurture and, through reductionist procedures, come to conclusions that the one or the other plays a more important role in behavior and development.

Plomin’s reductionist takes, therefore, again fail. Plomin’s “reluctance” to discuss topics “tangential” to “inherited DNA differences” included epigenetics (Plomin, 2018: 12). But it seems that his “reluctance” to discuss epigenetics was a downfall of his book, as epigenetic mechanisms can and do make a difference to “inherited DNA differences” (see for example Baedke, 2018, Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, and Meloni, 2019, Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics; see also Meloni, 2018). The genome can and does “react” to what occurs to the organism in the environment, so it is false that “nothing in our brains, behaviour or environment changes inherited differences in DNA sequence” (Plomin and Stumm, 2018), since our behavior and actions can and do methylate our DNA (Meloni, 2014)—which falsifies Plomin’s claim and which is why he should have discussed epigenetics in Blueprint. End Edit

Conclusion

So the main premise of Plomin’s Blueprint comes down to two claims: (1) that DNA is a fortune teller and (2) that personal genomics is a fortune-telling device. He draws these big claims from PGS/PRS studies. However, over 80 percent of GWA studies have been done on European populations, and knowing that we cannot carry these scores over to non-European populations greatly hampers the uses of PGS/PRS in other populations—and the PGS/PRS are not that useful in and of themselves even for European populations. Plomin’s whole book is a reductionist screed—“Sure, other factors matter, but DNA matters more” is one of his main claims. Though, a priori, since there is no privileged level of causation, one cannot privilege DNA over any other developmental variable (Noble, 2012). To understand disease, we must understand the whole system and how, when one part of the system becomes dysfunctional, the other parts and the running of the system are affected. The PGS/PRS hunts are reductionist in nature, and the only answer to these reductionist paradigms is a new paradigm from systems biology—one of holism.

Plomin’s assertions in his book are gleaned from highly confounded GWA studies. Plomin also assumes that we can disentangle nature and nurture—like all reductionists. Nature and nurture interact: without genes there would still be an environment, but without an environment there would be no genes, as gene expression is predicated on the environment and what occurs in it. So Plomin’s reductionist claim that “Our future is DNA” is false—our future is studying the interactive developmental system, not reducing it to a sum of its parts. Holistic biology—systems biology—beats reductionist biology—the Neo-Darwinian Modern Synthesis.

DNA is not a blueprint nor a fortune teller, and personal genomics is not a fortune-telling device. The claims that DNA is a blueprint/fortune teller and that personal genomics is a fortune-telling device come from Plomin and are derived from highly flawed GWA studies and, further, from PGS/PRS. Therefore both of Plomin’s claims are false.

(Also read Eric Turkheimer's 2019 review of Plomin's book, The Social Science Blues, along with Steve Pitteli's review Biogenetic Overreach, for an overview and critique of Plomin's ideas. And read Ken Richardson's article It's the End of the Gene As We Know It for a critique of the concept of the gene.)

Prediction, Accommodation, and Explanation in Science: Are Just-so Stories Scientific?

2300 words

One debate in the philosophy of science is whether a scientific hypothesis should make testable predictions or merely explain what it purports to explain. Should a scientific hypothesis H predict previously unknown facts of the matter, or only explain an observation? Take, for example, evolutionary psychology (EP). Any EP hypothesis H can speculate on the so-called causes that led a trait to become fixed in a biological population of organisms, but that is all it can do; the further claim—that EP hypotheses can generate successful predictions of previously unknown facts not used in the construction of the hypothesis—does not hold. The claim, therefore, that EP hypotheses are anything but just-so stories is false.

Prediction and novel facts

For example, Einstein's theory of general relativity predicted the bending of light, which was a novel prediction for the theory (see pg 177-180 for predictions generated from Einstein's theory). Fresnel's wave theory of light predicted diffraction fringes and, further, the white spot—a spot which appears in a circular object's shadow due to Fresnel diffraction (see Worrall, 1989). So Fresnel's theory explained diffraction, and the theory then generated a testable—and successful—novel prediction (see Magnus and Douglas, 2013). That is an example of successful novel prediction. Ad hoc hypotheses, by contrast, are produced "for this" explanation alone—so the only evidence for the hypothesis is, for example, the existence of trait T. EP hypotheses attempt to explain the fixation of any trait T in humans, but all EP hypotheses do is explain—they generate no testable, novel predictions of previously unknown facts.

A defining feature of science and what it purports to do is to predict facts-of-the-matter which are yet to be known. John Beerbower (2016) explains this well in his book Limits of Science? (emphasis mine):

At this point, it seems appropriate to address explicitly one debate in the philosophy of science—that is, whether science can, or should try to, do more than predict consequences. One view that held considerable influence during the first half of the twentieth century is called the predictivist thesis: that the purpose of science is to enable accurate predictions and that, in fact, science cannot actually achieve more than that. The test of an explanatory theory, therefore, is its success at prediction, at forecasting. This view need not be limited to actual predictions of future, yet to happen events; it can accommodate theories that are able to generate results that have already been observed or, if not observed, have already occurred. Of course, in such cases, care must be taken that the theory has not simply been retrofitted to the observations that have already been made—it must have some reach beyond the data used to construct the theory.

That a theory or hypothesis explains observations isn't enough—it must generate successful predictions of novel facts. If it does not generate any novel facts-of-the-matter, then of what use is the hypothesis, given that it only weakly justifies the phenomenon in question? So now, what is a novel fact?

A novel fact is a fact that’s generated by hypothesis H that’s not used in the construction of the hypothesis. For example, Musgrave (1988) writes:

All of this depends, of course, on our being able to make good the intuitive distinction between prediction and novel prediction. Several competing accounts of when a prediction is a novel prediction for a theory have been produced. The one I favour, due to Elie Zahar and John Worrall, says that a predicted fact is a novel fact for a theory if it was not used to construct that theory — where a fact is used to construct a theory if it figures in the premises from which that theory was deduced.

Mayo (1991: 524; her emphasis) writes that a "novel fact [is] a newly discovered fact—one not known before used in testing." So a prediction is novel when it concerns a fact of the matter not used in the construction of the hypothesis—e.g., a future event. About novel predictions, Musgrave also writes that "It is only novel predictive success that is surprising, where an observed fact is novel for a theory when it was not used to construct it." So hypothesis H entails evidence E; evidence E was not used in the construction of hypothesis H; therefore E is novel evidence for hypothesis H.
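
As a rough formalization of the use-novelty criterion just stated (my own illustrative rendering of the schema, not a formula quoted from Zahar, Worrall, Musgrave, or Mayo):

```latex
% Use-novelty, schematically: E is novel for H iff H entails E
% and E was not among the facts used to construct H.
\[
\mathrm{Novel}(E,H) \iff \bigl(H \vdash E\bigr) \wedge \bigl(E \notin \mathrm{Basis}(H)\bigr)
\]
% Here Basis(H) stands for the set of facts that figured as premises
% in the construction of H, per Musgrave's gloss above.
```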

To philosopher of science Imre Lakatos, a progressive research program is one that generates novel facts, whereas a degenerating research program either fails to generate novel facts or has its once-novel predictions continually falsified, according to Musgrave in his article on Lakatos. We can put EP in the "degenerating research program" category, as no EP hypothesis generates any type of novel prediction—the only evidence for the hypothesis is the existence of the trait.

Evolutionary Psychology

The term "just-so stories" comes from Rudyard Kipling's Just So Stories for Little Children. Gould and Lewontin then used the term for evolutionary hypotheses that can only explain and cannot predict as-yet-unknown events. Law (2016) notes that just-so stories offer "little in the way of independent evidence to suggest that it is actually true." Sterelny and Griffiths (1999: 61) state that just-so stories are "… an adaptive scenario, a hypothesis about what a trait's selective history might have been and hence what its function may be." Examples of just-so stories covered on this blog include: beards, FOXP2, cartels and Mesoamerican ritual sacrifice, Christian storytelling, just-so storytellers and their pet just-so stories, the slavery hypertension hypothesis, fear of snakes and spiders, and cold winter theory. Smith (2016: 278) has a helpful table showing ten different definitions and descriptions of just-so stories:

[Table from Smith (2016: 278): ten definitions and descriptions of just-so stories.]

So the defining feature of a just-so story is the lack of independent evidence for the proposed explanation of the trait's existence. There must be independent reasons to believe a given hypothesis, as the defining feature of a scientific hypothesis or theory is whether it can predict yet-to-happen events. Though, as Beerbower notes, we have to be careful that the theory has not simply been retrofitted to the observations.

One can make an observation. Then they can work backward (what Richardson (2007) calls "reverse engineering") and posit (speculate about) a good-sounding story (just-so storytelling) to explain this observation. Reverse engineering is "a process of figuring out the design of a mechanism on the basis of an analysis of the tasks it performs" (Buller, 2005: 92). Of course, the just-so storyteller can then create a story to explain the fixation of the trait in question. But that is only (purportedly) an explanation of why the trait came to fixation such that we observe it today. There are no testable predictions of previously unknown facts. So it's all storytelling—speculation.

The theory of natural selection is then deployed to attempt to explain the fixation of trait T in a population. It is true that a hypothesis is weakly corroborated by the existence of trait T, but what makes it a just-so story is the fact that there are no successful predictions of previously unknown facts.

When it comes to EP, one can say that the hypothesis “makes sense” and it “explains” why trait T still exists and went to fixation. However, the story only “makes sense” because there is no other way for it to be—if the story didn’t “make sense”, then the just-so storyteller wouldn’t be telling the story because it wouldn’t satisfy their aims of “proving” that a trait is an adaptation.

Smith (2016: 277-278) notes seven just-so story triggers:

1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.

EP is most guilty of (3), (4), (5), (6), and (7). It is guilty of (3) in that it hardly ever posits other explanations for trait T—it's always "adaptation", as EP is an adaptationist paradigm. It is perhaps most guilty of (4): that trait T still exists and is useful today is not evidence that trait T was selected-for the use we see it put to today. This then leads to (5), the misuse of reverse engineering. Just-so stories are ad hoc ("for this") explanations, and such explanations are ad hoc if there is no independent data for the hypothesis. It is, of course, guilty of (7) in that it attempts to explain unique events in human evolution. Many problems exist for evolutionary psychology (see, for example, Samuels, 1998; Lloyd, 1999; Prinz, 2006), but the biggest problem is the inability of any hypothesis to generate testable, novel predictions. Smith (2016: 279) further writes that:

An important weakness in the use of narratives for scientific purposes is that the ending is known before the narrative is constructed. Merton pointed out that a “disarming characteristic” of ex post facto explanations is that they are always consistent with the observations because they are selected to be so.

Bo Winegard, in his defense of just-so storytelling, writes "that inference to the best explanation most accurately describes how science is (and ought to be) practiced. According to this description, scientists forward theories and hypotheses that are coherent, parsimonious, and fruitful." However, as Smith (2016: 280-281) notes, that a hypothesis is "coherent", "parsimonious" and "fruitful" (along with 11 more explanatory virtues of IBE, including depth, precision, consilience, and simplicity) is not sufficient reason to accept IBE. IBE is not a solution to the problems raised by the just-so story critics, as the slew of explanatory virtues does not lend evidence that T was an adaptation and thus does not lend evidence that hypothesis H is true.

Simon (2018: 5) concludes that "(1) there is much rampant speculation in evolutionary psychology as to the reasons and the origin for certain traits being present in human beings, (2) there is circular reasoning as to a particular trait's supposed advantage in adaptability in that a trait is chosen and reasoning works backward to subjectively "prove" its adaptive advantage, (3) the original classical theory is untestable, and most importantly, (4) there are serious doubts as to Natural Selection, i.e., selection through adaptive advantage, being the principal engine for evolution." (1) is true since that's all EP is—speculation. (2) is true in that evolutionary psychologists notice trait T and reason that, since it survives today, there must be a function it performs for which natural selection "selected" the trait to propagate in the species (though selection cannot select-for certain traits). (3) is true in that we have no time machine to go back and watch how trait T evolved (this is where the storytelling narrative comes in: if only we had a good story to tell about the evolution of trait T). And finally, (4) is also true since natural selection is not a mechanism (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010).

EP exists in an attempt to explain so-called psychological adaptations humans have to the EEA (environment of evolutionary adaptiveness). So one looks at the current phenotype and then looks to the past in an attempt to construct a “story” which shows how a trait came to fixation. There are, furthermore, no hallmarks of adaptation. When one attempts to use selection theory to explain the fixation of trait T, they must wrestle with spandrels. Spandrels are heritable, can increase fitness, and they are selected as well—as the whole organism is selected. This also, of course, falls right back to Fodor’s (2008) argument against natural selection. Fodor (2008: 1) writes that the central claim of EP “is that heritable properties of psychological phenotypes are typically adaptations; which is to say that they are typically explained by their histories of selection.” But if “psychological phenotypes” cannot be selected, then the whole EP paradigm crumbles.

Conclusion

This is why EP is not scientific. It cannot make successful predictions of previously unknown facts not used in the construction of the hypothesis; it can only explain what it purports to explain. The claim, therefore, that EP hypotheses are anything but just-so stories is false. One can create good-sounding narratives for any type of trait. But that they "sound good" to the ear and are "plausible" is not a reason to believe that the story told is true.

Are all hypotheses just-so stories? No. Since a just-so story is an ad hoc hypothesis, and a hypothesis is ad hoc if it cannot be independently verified, a hypothesis that makes predictions which can be independently verified is not a just-so story. There are hypotheses that generate no predictions, ad hoc hypotheses (where the only evidence to believe H is the existence of trait T), and hypotheses that generate novel predictions. EP is the second of these—the only evidence we have to believe H is true is that trait T exists. Independent evidence is a necessary condition of science—that is, the ability of a hypothesis to predict novel evidence is a necessary condition for science. That no EP hypothesis can generate a successful novel prediction is evidence that all EP hypotheses are just-so stories. So for the criticism to be refuted, one would have to name an EP hypothesis that is not a just-so story—that is, (1) name an EP hypothesis, (2) state the prediction, and then (3) state how the prediction follows from the hypothesis.

To be justified in believing hypothesis H as an explanation of how trait T became fixated in a population, there must be independent evidence for this belief. The hypothesis must generate a novel fact which was previously unknown before the hypothesis was constructed. If the hypothesis cannot generate any predictions, or the predictions it makes are continuously falsified, then the hypothesis is to be rejected. No EP hypothesis can generate successful predictions of novel facts and so the whole EP enterprise is a degenerative research program. The EP paradigm explains and accommodates, but no EP hypothesis generates independently confirmable evidence. Therefore EP is not a scientific program and just-so stories are not scientific.

Just-so Stories: Cartels and Mesoamerican Ritual Sacrifice

1550 words

Mexican drug cartels kill in some of the most heinous ways I’ve ever seen. I won’t link to them here, but a simple Google search will show you the brutal, heinous ways in which they kill rivals and snitches. Why do they kill like this? I have a simple just-so story to explain it: Mexican drug cartels—and similar groups—kill the way they do because they are descended from Aztecs, Maya, and other similar groups who enacted ritual sacrifices to appease their gods.

For example, Munson et al (2014) write:

Among the most noted examples, Aztec human sacrifice stands out for its ritual violence and bloodshed. Performed in the religious precincts of Tenochtitlan, ritual sacrifice was a primary instrument for social integration and political legitimacy that intersected with militaristic and marketplace practices, as well as with beliefs about the cosmological order. Although human sacrifice was arguably less common in ancient Maya society, physical evidence indicates that offerings of infant sacrifices and other rituals involving decapitation were important religious practices during the Classic period.

The Aztecs believed that sacrificial blood-letting appeased their gods, who fed on the human blood. They also committed the sacrifices "so that the sun could continue to follow its course" (Garraud and Lefrere, 2014). Their sun god—Uitzilopochtli—was given strength by sacrificial bloodletting, which benefited the Aztec population "by postponing the end of the world" (Trewby, 2013). The Aztecs also sacrificed children to their rain god Tlaloc (Froese, Gershenson, and Manzanilla, 2014). Further, the Aztec ritual of cutting out still-beating hearts arose from the Maya-Toltec traditions (Ceruti, 2015).

Regarding Aztec sacrifices, Winkelman (2014: 50) writes:

Anthropological efforts to provide a scientific explanation for human sacrifice and cannibalism were initiated by Harner (1970, 1977a, 1977b). Harner pointed out that the emic normalcy of human sacrifice—that it is required by one’s gods and religion—does not alone explain why such beliefs and behaviours were adopted in specific societies. Instead, Harner proposed explanations based upon causal factors found in population pressure. Harner suggested that the magnitude of Aztec human sacrifice and cannibalism was caused by a range of demographic-ecological conditions—protein shortages, population pressure, unfavourable agricultural conditions, seasonal crop failures, the lack of domesticated herbivores, wild game depletion, food scarcity and famine, and environmental circumscription limiting agricultural expansion.

So, along with appeasing and “feeding” their gods, there were sociological reasons for why they committed human sacrifices, and even cannibalism.

When it comes to the Maya (a civilization that independently discovered numerous things while being largely isolated from other civilizations), they had a game called pok-ta-tok—named for the sound the ball made when the players hit it or it fell on the ground. As described in the Popol Vuh (the K'iche' Maya book that lays out their creation myth), humans and the lords of the Underworld played this game. The Maya Hero Twins Hunahpu and Xbalanque went to the Underworld—called Xibalba—to do battle against its lords (see Zaccagnini, 2003: 16-20 for a description of the Maya Hero Twins myth and how it relates to pok-ta-tok, and also Myers (2002: 6-13)). See Tokovinine (2002) for more information on pok-ta-tok.

This game was created by the Olmec, a precursor people to the Maya, and later played by the Aztecs. The court was seen as the portal to Xibalba. The Aztecs then started playing the game and continued the tradition of murdering the losing team. The rubber ball [1] weighed around ten pounds, and so it must have caused a lot of bruising and head injuries to players who got hit in the head and body with it—as they used their forearms and thighs to pass the ball. (See The Brutal and Bloody History of the Mesoamerican Ball Game, Where Sometimes Loss Was Death.)

According to Zaccagnini (2003: 6), "The ballgame was executed for many reasons, which include social functions, for recreation or the mediation of conflict for instance, the basis for ritualized ceremony, and for political purposes, such as acting as a forum for the opposing groups to compete for political status (Scarborough 1991: 141)." Zaccagnini (2003: 7-8) states that the most vied-for participants in the game were captured Maya kings and that they were considered "trophies" of the people who captured them. Those who were captured had to play the game and were—essentially—playing for their lives. The Maya used the game as a stand-in for war, which is seen in the fact that they played against invading Toltecs in their region (Zaccagnini, 2003: 8).

Death by decapitation occurred to the losers of the game, and, sometimes, skulls of the losing players were used inside of the rubber balls they used to play the game. The Maya word for ball—quiq—literally means “sap” or “blood” which refers to how the rubber ball itself was constructed. Zaccagnini (2003: 11) notes that “The sap can be seen as a metaphoric blood which flows from the tree to give rise to the execution of the ballgame and in this respect, can imply further meaning. The significance of blood in the ballgame, which implies death, is tremendous and this interpretation of the connection of blood and the ball correlated with the notion that the ball is synonymous with the human head is important.” (See both Zaccagnini, (2003) and Tokovinine (2002) for pictures of Maya hieroglyphs which depict winning and losing teams, decapitations, among other things.)

So, the game was won when the ball passed through the hoop which was 20-30 feet in the air, hanging from a wall. These courts, too, were linked to celestial events that occurred (Zaccagnini, 2003). It has been claimed that the ball passing through the hoop was a depiction of the earth passing through the center of the Milky Way.

Avi Loeb notes that "The Mayan culture collected exquisite astronomical data for over a millennium with the false motivation that such data would help predict its societal future. This notion of astrology prevented the advanced Mayan civilization from developing a correct scientific interpretation of the data and led to primitive rituals such as the sacrifice of humans and acts of war in relation to the motions of the Sun and the planets, particularly Venus, on the sky." The planets and constellations, of course, were also of importance in Maya society. Šprajc (2018) notes that "Venus was one of the most important celestial bodies", while also stating:

Human sacrifices were believed necessary for securing rain, agricultural fertility, and a proper functioning of the universe in general. Since the captives obtained in battles were the most common sacrificial victims, the military campaigns were religiously sanctioned, and the Venus-rain-maize associations became involved in sacrificial symbolism and warfare ritual. These ideas became a significant component of political ideology, fostered by rulers who exploited them to satisfy their personal ambitions and secular goals. In sum, the whole conceptual complex surrounding the planet Venus in Mesoamerica can be understood in the light of both observational facts and the specific socio-political context.

The relationship between the ballgame, Venus, and the fertility of the land in regard to the agricultural cycle and Venus is also noted by Šprajc (2018). The Maya were expert astronomers and constantly watched the skies and interpreted certain things that occurred in the cosmos in the context of their beliefs.

I have just described the ritualistic sacrifices of the Maya. This, then, is linked to my just-so story, which I first espoused on Twitter back in July of 2018:

Then in January of this year, white nationalist Angelo John Gage unironically used my just-so story!:

Needless to say, I found it hilarious that it was used unironically. Of course, since Mexicans and other Mesoamericans are descendants of the Aztec, Maya and other Indian groups native to the area, one can make this story “fit with” what we observe today. Going back to the analysis above of the Maya ballgame pok-ta-tok, the Maya were quite obviously brutal in their decapitations of the losing teams of the game. Since they decapitated the losing players, this could be seen as a sort of cultural transmission of certain actions (though I strongly doubt that that is why cartels and similar groups kill in the way they do—the exposition of the just-so story is just a funny joke to me).

In sum, my just-so story for why Mexican drug cartels and similar groups kill in the way they do is, as Smith (2016: 279) notes, "always consistent with the [observation] because [it is] selected to be so." The reasons why the Aztecs, Maya, and other Mesoamerican groups participated in these ritualistic sacrifices are numerous: appeasing the gods, securing agricultural fertility, even cannibalism and related things. There were various ecological reasons why the Aztecs may have committed human sacrifice, and it was—of course—linked back to the gods they were trying to appease.

The ballgame they played attests to the layout of their societies and how those societies functioned in the context of their beliefs about appeasing their numerous gods. When the Spanish landed in Mesoamerica and made first contact with the Maya, it took them nearly two centuries to defeat them—though the Maya population was already withering away due to climate change and other related factors (I will cover this in a future article). Although the Spanish destroyed many—if not most—Maya codices, we can glean important information about their lifestyle and about how and why they played their ballgame, which ended in the ritualistic sacrifice of the losing team.

African Neolithic Part 1: Amending Common Misunderstandings

One of the weaknesses, in my opinion, of HBD is the focus on the Paleolithic and modern eras while glossing over the major developments in between. For instance, the links made between Paleolithic Western Europe's Cro-Magnon art and modern Western Europe's prowess (note the geographical/genetic discontinuity there, for those actually informed on such matters).

Africa, having a worse archaeological record due to ideological histories and modern problems, is rather vulnerable to reliance on the outdated sources already discussed before on this blog. This glossing-over, however, isn't absolute.

Updated material will eventually be presented in a future outline of Neolithic-to-Middle Ages development in West Africa.

A recent example of an erroneous comparison would be in Heiner Rindermann's Cognitive Capitalism, pages 129-130. He makes multiple claims about precolonial African development to explain a prolonged investment in magical thinking.

  • Metallurgy not developed independently.
  • No wheel.
  • The Dinka did not properly use cattle, since a large portion were castrated and left uneaten.
  • No domesticated animals of indigenous origin despite European animals being just as dangerous, contra Diamond (he lists African dogs, cats, antelope, gazelle, and zebras as potential specimens, and mentions European foxes as an example of a "dangerous" animal recently domesticated, along with African antelope in Ukraine).
  • A late, diffused Neolithic Revolution, 7,000 years after that of the Middle East.
  • Less complex Middle Ages structure.
  • Less complex cave structures.

Now, technically, much of this falls outside of what would be considered "Neolithic", even in Africa's case. However, understanding the context of Neolithic development in Africa provides context for each of these points and periods by virtue of causality. Thus, they will be responded to in archaeological sequence.

Dog domestication, Foxes, and human interaction.

The domestication of dogs occurred when Eurasian hunter-gatherers intensified megafauna hunting, attracting less aggressive wolves to tame around 23k-25k years ago. Rindermann's mention of the fox experiment replicates this idea. Domestication isn't a matter of breaking the most difficult of animals; it's using the easiest ones to your advantage.

In this same scope, this needs to be compared to Africa's case. In terms of behavior, African wild dogs are rarely solitary, so attracting lone individuals is already impractical. The species likewise developed under a different level of competition.

They were probably under as much competition from these predators as the ancestral African wild dogs were under from the guild of super predators on their continent.

What was different, though, is that the ancestral wolves never evolved in an environment in which scavenging from various human species was a constant threat, so they could develop behaviors towards humans that were not always characterized by extreme caution and fear.

Europe in particular had a lower carnivore density, which was thus advantageous to hominids.

Consequently, the first Homo populations that arrived in Europe at the end of the late Early Pleistocene found mammal communities consisting of a low number of prey species, which accounted for a moderate herbivore biomass, as well as a diverse but not very abundant carnivore guild. This relatively low carnivoran density implies that the hominin-carnivore encounter rate was lower in the European ecosystems than in the coeval East African environments, suggesting that an opportunistic omnivorous hominin would have benefited from a reduced interference from the carnivore guild.

This pattern is also borne out by megafaunal extinction data.

The first hints of abnormal rates of megafaunal loss appear earlier, in the Early Pleistocene in Africa around 1 Mya, where there was a pronounced reduction in African proboscidean diversity (11) and the loss of several carnivore lineages, including sabertooth cats (34), which continued to flourish on other continents. Their extirpation in Africa is likely related to Homo erectus evolution into the carnivore niche space (34, 35), with increased use of fire and an increased component of meat in human diets, possibly associated with the metabolic demands of expanding brain size (36). Although remarkable, these early megafauna extinctions were moderate in strength and speed relative to later extinctions experienced on all other continents and islands, probably because of a longer history in Africa and southern Eurasia of gradual hominid coevolution with other animals.

This fundamental difference in adaptation to human presence and subsequent response is obviously a major detail in in-situ animal domestication.

Another example would be the failure of even colonialists to tame the Zebra.

Of course, this alone may not be good enough. One can nonetheless cite the tameable Belgian Congo forest elephant, or the eland. So we can set aside merely regurgitating Diamond.

This leads me to my next point: what's the pay-off?

Pastoralism and Utility

A decent test of what fauna in Africa could be utilized would be the "experiments" of the Ancient Egyptians, who are seen as the Eurasian "exception" to African civilization. Hyenas and antelope, from what I've read, were kept in captivity, but over time this didn't result in selected traits. The only domesticated animal in this region would be the donkey, a closer relative of the zebra.

This brings to light another perspective on the Russian fox experiments: why were pet foxes not a trend for Eurasians prior to the 20th century? It can be assumed, then, that attempts at animal domestication simply were not worth the investment in the wake of already domesticated animals, even for someone who grew up in a society/genetic culture that harnessed the relevant skills.

For instance, a slow herd of eland can be corralled and domesticated, but will it pay off compared to the gains from adapting already-domesticated, diffused animals to a new environment? (This will be expanded upon in the future as well.)

Elephants are useful for large colonial projects, but their herding behavior, local diseases, and disrupted population densities again affect the utility of large-bodied animals. Investing in agriculture and iron proved more successful.

Cats effectively domesticated themselves and lacked any real utility before they began feasting on urban pests. In Africa, with its highly mobile groups (as will be explained later), investment in cats wasn't going to change much. Wild guineafowl, however, were useful to tame in West Africa and were used to eat insects.

As can be seen here, pastoralism, diffused from the Middle East, is roughly as old in Africa as it is in Europe. Both regions lacked independently domesticated species prior to its arrival and made few innovations with in-situ beasts beyond that foundation. (Advances in plant management preceding developed agriculture, a sort of skill that would parallel dog domestication for husbandry, will be discussed in a future article.)

And given how advanced Mesoamericans became without draft animals, as mentioned before, the importance of draft animals seems to be overplayed from a purely "indigenous" perspective. The role of independent invention itself ought to be questioned as well, in terms of what we can actually infer.

Borrowed, so what?

In a thought experiment, let's consider some key details of diffusion. The independent invention of animal domestication or metallurgy is by no means something to be glossed over. Over-fixating on it, however, in turn glosses over some other details of successful diffusion.

Why would a presumably less apt population adopt a cognitively demanding skill and reorient its whole way of society around it, without our attributing this change to an internal change in character compared to before? Living in a new type of economic system is, as a trend, bound to result in a new population with regard to using cognition to exploit resources. This would require contributions of their own to the process.

This applies with regard to African domesticated breeds:

Viewing domestication as an invention also produces a profound lack of curiosity about evolutionary changes in domestic species after their documented first appearances. [……] African domesticates, whether or not from foreign ancestors, have adapted to disease and forage challenges throughout their ranges, reflecting local selective pressures under human management. Adaptations include dwarfing and an associated increase in fecundity, tick resistance, and resistance to the most deleterious effects of several mortal infectious diseases. While the genetics of these traits are not yet fully explored, they reflect the animal side of the close co-evolution between humans and domestic animals in Africa. To fixate upon whether or not cattle were independently domesticated from wild African ancestors, or to dismiss chickens’ swift spread through diverse African environments because they were of Asian origin, ignores the more relevant question of how domestic species adapted to the demands of African environments, and how African people integrated them into their lives.

The same can be said for metallurgy:

We do not yet know whether the seventh/sixth century Phoenician smelting furnace from Toscanos, Spain (illustrated by Niemeyer in MA, p.87, Figure 3) is typical, but it is clearly very different from the oldest known iron smelting technology in sub-Saharan Africa. Almost all published iron smelting furnaces of the first millennium cal BC from Rwanda/Burundi, Buhaya, Nigeria, Niger, Cameroon, Congo, Central African Republic and Gabon are slag-pit furnaces, which are so far unknown from this or earlier periods in the Middle East or North Africa. Early Phoenician tuyères, which have square profiles enclosing two parallel (early) or converging (later) narrow bores are also quite unlike those described for early sites in sub-Saharan Africa, which are cylindrical with a single and larger bore.

African ironworkers adapted bloomery furnaces to an extraordinary range of iron ores, some of which cannot be used by modern blast furnaces. In both northern South Africa (Killick & Miller 2014) and in the Pare mountains of northern Tanzania (Louise Iles pers. comm., 2013) magnetite-ilmenite ores containing up to 25 per cent TiO2 (by mass) were smelted. The upper limit for TiO2 in iron ore for modern blast furnaces is only 2 per cent by mass (McGannon 1971). High-titanium iron ores can be smelted in bloomery furnaces because these operate at lower temperatures and have less-reducing furnace atmospheres than blast furnaces. In the blast furnace titanium oxide is partially reduced and makes the slag viscous and hard to drain, but in bloomery furnaces it is not reduced and combines with iron and silicon oxide to make a fluid slag (Killick & Miller 2014). Blast furnace operators also avoid ores containing more than a few tenths of a percent of phosphorus or arsenic, because when these elements are dissolved in the molten iron, they segregate to grain boundaries on crystallization, making the solid iron brittle on impact.

McIntosh goes over how the transition from the Neolithic to the Iron Age transformed African stratification and launched indigenous progress into the Middle Ages. This will undoubtedly be revisited in future works.

With all of this said, what gave the impression of stagnation?

Inefficient Dinka?

Rindermann cites Baker secondhand on Dinka cattle castration, claiming that up to a third are castrated just to "look good and fat". Multiple sources complicate this simplistic image.

Bulls (and rams) are often, but not necessarily, castrated at a fairly advanced age, probably in part to allow the conformation and characteristics of the animal to become evident before the decision is made. A castrated steer is called muor buoc, an entire bull thon (men in general are likened to muor which are usually handsome animals greatly admired on that account; an unusually brave, strong or successful man may be called thon, that is, "bull with testicles"). Dinka do not keep an excess of thon, usually one per 10 to 40 cows. Stated reasons for the castration of others are for important esthetic and cultural reasons, to reduce fighting, for easier control, and to prevent indiscriminant or repeat breeding of cows in heat (the latter regarded as detrimental to pregnancy and accurate genealogies).

Here, the ratio is even higher, but castration is not reduced solely to aesthetic reasons.

Godfrey Lienhardt clarifies that the preference isn't indiscriminate: the aesthetic is based on the configuration of the cattle's coat. And, contra Baker's quote, it is clarified that all cattle, once dead, are eaten for their meat regardless.

Francis Deng likewise estimates, based on the amount of cattle they casually amass, that the Dinka are among the wealthiest people in Africa by cattle count. He likewise distinguishes the purpose of personality oxen (of the desired configuration) as a reflection of their intense investment in cattle, not neglect.

Neumann on Diffused Agriculture from the "North"?

That is, Katharina Neumann's 2003 article on the "late emergence" of agriculture. While she does review the data suggesting a late agricultural revolution, she doesn't suggest anywhere that it was "likely" diffused from the north; rather, she explains it in terms of the highly mobile lifestyles of hunter-gatherers and pastoralists being better supported than a sedentary lifestyle by the abundant but seasonally distributed wild plants.

She also mentions the relatively higher abundance of wild plants in the savanna over the rainforests, which probably resulted in continuous plant exploitation by pottery-using hunter-gatherers in Ghana 10k years ago.

Neumann also notes the differences between Africa and the Middle East: not only did pastoralism precede agriculture, but so did pottery. Actual vessels were not seen in Europe until local hunter-gatherers were replaced by Middle Eastern farmers.

Since then, pearl millet, rice, yams, and cowpeas have been confirmed to be crops indigenous to the area, against the hypotheses of others. Multiple studies show a late expansion southwards, likely linking them to Niger-Congo speakers. Modern SSA genetics likewise reveals farmer-population expansion signals, similar to the Neolithic-ancestry signal in Europeans, consistent with the region's own late date of agriculture.

Renfrew

Rindermann also made multiple remarks on Africa's "exemplars", trying to construct a sort of perpetual gap since the Paleolithic by citing Renfrew's Neuroscience, evolution and the sapient paradox: the factuality of value and of the sacred. However, Renfrew doesn't quite support the comparisons Rindermann made and is approaching a whole different point.

The discovery of clearly intentional patterning on fragments of red ochre from the Blombos Cave (at ca 70 000 BP) is interesting when discussing the origins of symbolic expression. But it is entirely different in character, and very much simpler than the cave paintings and the small carved sculptures which accompany the Upper Palaeolithic of France and Spain (and further east in Europe) after 40 000 BP.[….]

It is important to remember that what is often termed cave art—the painted caves, the beautifully carved ‘Venus’ figurines—was during the Palaeolithic (i.e. the Pleistocene climatic period) effectively restricted to one developmental trajectory, localized in western Europe. It is true that there are just a few depictions of animals in Africa from that time, and in Australia also. But Pleistocene art was effectively restricted to Franco-Cantabria and its outliers.

It was not until towards the end of the Pleistocene period that, in several parts of the world, major changes are seen (but see  for a more nuanced view, placing more emphasis upon developments in the Late Palaeolithic). They are associated with the development of sedentism and then of agriculture and sometimes stock rearing. At the risk of falling into the familiar ‘revolutionary’ cliché, it may be appropriate to speak of the Sedentary Revolution (, ch. 7).[….] Although the details are different in each area, we see a kind of sedentary revolution taking place in western Asia, in southern China, in the Yellow River area of northern China, in Mesoamerica, and coastal Peru, in New Guinea, and in a different way in Japan ().

And just for context as to where precolonial Africa stood, the best shorthand I can give is that the clear economic-growth gaps cluster between Western and non-Western countries; that is, the "modern" differences in economic growth developed over the 20th century.

As for simply looking at development, the story seems to be the same at around 1500 A.D.

 paints a picture of African development in 1500, both relative to the rest of the world and heterogeneity within the continent itself, using as his indicators population density, urbanization, technological advancement, and political development. Ignoring North Africa, which was generally part of the Mediterranean world, the highest levels of development by many indicators are found in Ethiopia and in the broad swathe of West African countries running from Cameroon and Nigeria eastward along the coast and the Niger river. In this latter region, the available measures show a level of development just below or sometimes equal to that in the belt of Eurasia running from Japan and China, through South Asia and the Middle East, into Europe. Depending on the index used, West Africa was above or below the level of development in the Northern Andes and Mexico. Much of the rest of Africa was at a significantly lower level of development, although still more advanced than the bulk of the Americas or Australia.

With all this said, China, traditionally held to be a higher society, wasn't necessarily devoid of gruesome magical thinking either, and for a comparable duration.

Still, Rindermann may have a point on the particular intensity of such behavior relative to modern norms in Africa, yet it needs a stronger premise, such as actually tracking the prevalence of superstition.

Richard Fuerle and OOA: Morphological and Genetic Incongruencies

This is a topic I've been wanting to do for a while. Though it can be said that many scientists who investigate these topics receive, to an unfair extent, public outcry about a return of racial segregationist ideology to academia, it would be odd to apply the same defense to Richard Fuerle, and not in any ironic way. He basically peddled the Carleton Coon multiregional theory that not even multiregionalists would buy, though a quick Google search will lead you to those who would (not the most unbiased group).

The intent of this article is to show that a decent chunk of Fuerle's arguments are indeed outdated and don't jibe with current evidence. While not a review of the whole book, this post will demonstrate enough basic facts to convince you to discount his arguments.

Credentials-

His credentials are given on the first page (the hardest page, in my opinion), and none are in biology. For reference, I encourage commenters to cite from the book if they take issue with my criticisms, as I'm only paraphrasing from this point forward for this reason.

Bone density-

Quick and simple (and somewhat setting a pattern): this is a trait that RR has talked about in the past, with others still getting it wrong. Rather than being a reduced or adaptive specialization, the lower bone density of modern Europeans came as a result of sedentary behavior from the Neolithic onward.

Sedentary living among Sub-Saharan Africans is far more recent; even with crops going back several millennia B.C.E., intensification wasn't that common until plantations were used during the slave trade. Shifting cultivation, though variable, was the norm. I'll touch upon this in a future article on the African Neolithic.

Dentition:

One of his other pitfalls was the implications of shovel teeth in modern populations:

  1. The high rate of shoveling is indicative of modern phylogenetic ancestry, supporting the case for Asians.
  2. The trait in Asians derives from Peking Man.

Both are pretty much refuted by the archaic and modern variants of the trait being different. And contra the expectations of his estimates of human divergences being millions of years old, Europeans are closer to modern Africans than to Neanderthals in dentition. This also refutes the assertion of the primitive nature of Africans compared to other humans in the case of phylogenetics. On the particular features, it's another story.

In this case there’s no need to look any further than the works of Joel Irish, who I’m willing to bet is unparalleled in this topic in modern research.

Retention of primitive features is something that goes back to the African migrants into Eurasia; Homo sapiens, both recent and past, have long retained archaic traits.

We recently examined whether or not a universal criterion for dental modernity could be defined (Bailey and Hublin 2013). Like cranial morphology, dental morphology shows a marked range of variation; so much that multiple geographic dental patterns (e.g., Mongoloid, Proto-Sundadont, Indodont, Sub-Saharan African, Afridont, Caucasoid, Eurodont, Sundadont, Sinodont) have been identified in recent humans (Hanihara 1969, 1992; Mayhall et al. 1982; Turner 1990; Hawkey 1998; Irish 1998, 2013; Scott et al. 2013). Our analysis confirmed that, while some populations retain higher frequencies of ancestral (i.e., primitive) dental traits [e.g., Dryopithecus molar, moderate incisor shoveling (Irish 1997)] and others show higher frequencies of recently evolved (i.e., derived) dental traits [e.g., double shoveling, four-cusped lower molars (Turner 1983; Irish and Guatelli-Steinberg 2003)], all recent humans show some combination of both primitive and derived traits (Bailey and Hublin 2013).

Africans tend to have higher frequencies of retained features, but in the context of recent Eurasian variants this is to be expected, and Irish has actually used this data to support an African dispersal.

Assuming that phenetic expression approximates genetic variation, previous dental morphological analyses of Sub-Saharan Africans by the author show they are unique among the world’s modern populations. Numerically-derived affinities, using the multivariate Mean Measure of Divergence statistic, revealed significant differences between the Sub-Saharan folk and samples from North Africa, Europe, Southeast Asia, Northeast Asia and the New World, Australia/Tasmania, and Melanesia. Sub-Saharan Africans are characterized by a collection of unique, mass-additive crown and root traits relative to these other world groups. Recent work found that the most ubiquitous of these traits are also present in dentitions of earlier hominids, as well as extinct and extant non-human primates; other ancestral dental features are also common in these forms. The present investigation is primarily concerned with this latter finding. Qualitative and quantitative comparative analyses of Plio-Pleistocene through recent samples suggest that, of all modern populations, Sub-Saharan Africans are the least derived dentally from an ancestral hominid state; this conclusion, together with data on intra- and inter-population variability and divergence, may help provide new evidence in the search for modern human origins.


The same was done by his colleague  who first posited a West Asian origin, as Fuerle did (though undoubtedly on much firmer grounds), and who has recently integrated this into modern OOA.

To date, the earliest modern human fossils found outside of Africa are dated to around 90,000 to 120,000 years ago at the Levantine sites of Skhul and Qafzeh. A maxilla and associated dentition recently discovered at Misliya Cave, Israel, was dated to 177,000 to 194,000 years ago, suggesting that members of the Homo sapiens clade left Africa earlier than previously thought. This finding changes our view on modern human dispersal and is consistent with recent genetic studies, which have posited the possibility of an earlier dispersal of Homo sapiens around 220,000 years ago. The Misliya maxilla is associated with full-fledged Levallois technology in the Levant, suggesting that the emergence of this technology is linked to the appearance of Homo sapiens in the region, as has been documented in Africa.

This then smoothly glides into the next topic.

Craniofacial data-

Thus we also find that the basis of modern diversification is recent, as in below 50k years in age.

On the appearance of modern East Asian and Native American traits:

Our results show strong morphological affinities among the early series irrespective of geographical origin, which together with the matrix analyses results favor the scenario of a late morphological differentiation of modern humans. We conclude that the geographic differentiation of modern human morphology is a late phenomenon that occurred after the initial settlement of the Americas.

On the features of earlier Paleoamericans.

During the last two decades, the idea held by some late 19th and early 20th century scholars (e.g., Lacerda and Peixoto, 1876; Rivet, 1908) that early American populations presented a distinct morphological pattern from the one observed among recent Native Americans, has been largely corroborated. Studies assessing the morphological affinities of early American crania have shown that crania dating to over seven thousand years BP generally show a distinct morphology from that observed in later populations. This observation is better supported in South America, where larger samples of early specimens are available: population samples from central Brazil (Lagoa Santa; Neves and Hubbe, 2005; Neves et al., 2007a) and Colombia (Bogotá Savannah; Neves et al., 2007b) as well as in isolated specimens from Southeast Brazil (Capelinha; Neves et al., 2005), Northeast Brazil (Toca dos Coqueiros; Hubbe et al., 2007) and Southern Chile (Palli Aike; Neves et al., 1999). Distinct cranial morphology has also been observed in early skulls from Meso-America (Mexico; González-José et al., 2005) and North America (Jantz and Owsley, 2001; Powell, 2005). This evidence has recently demonstrated that the observed high levels of morphological diversity within the Americas cannot simply be attributed to bias resulting from the small available samples of early crania, as was previously suggested (Van Vark et al., 2003).

Recent Native American cranial morphology varies around a central tendency characterized by short and wide neurocrania, high and retracted faces, and high orbits and nasal apertures. In contrast, the early South and Meso-American (hereafter Paleoamerican) crania tend to vary around a completely different morphology: long and narrow crania, low and projecting faces, and low orbits and nasal apertures (Neves and Hubbe, 2005). These differences are not subtle, being of roughly the same magnitude as the difference observed between recent Australian aborigines and recent East Asians (Neves and Hubbe, 2005; Neves et al., 2007a,b; but see González-José et al., 2008 for a different opinion). When assessed within the comparative framework of worldwide craniometric human variation, Paleoamerican groups show morphological affinities with some Australo-Melanesian and African samples, while Amerindian groups
Earlier waves of Native Americans were replaced by later waves of migrants from Asia with later specializations.

The same can be demonstrated in Africa.

For the second half of the Late Pleistocene and the period preceding the Last Glacial Maximum (LGM) (i.e., MIS 3), the only two sites with well preserved and securely dated human remains are Nazlet Khater 2 (38 ± 6 Ky, Egypt; Crevecoeur, 2008) and Hofmeyr (36.2 ± 3.3 Ky, South Africa; Grine et al., 2007). These fossils represent additional evidence for Late Pleistocene phenotypic variability of African sub-groups. The Hofmeyr specimen exhibits the greatest overall similarities to early modern human specimens from Europe rather than to Holocene San populations from the same region (Grine et al., 2007). Moreover, the Nazlet Khater 2 specimen preserves archaic features on the cranium and the mandible more comparable to those of Late Middle Pleistocene and early Late Pleistocene fossils than to chronologically closer recent African populations (Crevecoeur, 2012). These specimens represent aspects of modern human phenotypic variation not found in current populations. This situation seems to have lasted until the beginning of the Holocene in the African fossil record, not only in the northeastern part of the continent (Crevecoeur et al., 2009) but also in the west central (Iwo Eleru, Nigeria, Harvati et al., 2011; Stojanowski, 2014) and eastern regions (Lukenya Hill, Kenya, Tryon et al., 2015). During the Holocene, an increased homogenization of cranio-morphological features is documented, particularly within sub-Saharan Africa, with its peak during and after the Bantu expansion from 6 Ky ago (Ribot, 2011).

Without ambiguity, the EUP-like Hofmeyr skull was found to be archaic relative to recent Sub-Saharan Africans.

Although the supraorbital torus is comparable in thickness to that in UP crania, its continuous nature represents a more archaic morphology (26). In this regard, Hofmeyr is more primitive than later sub-Saharan LSA and North African UP specimens (such as Lukenya Hill and Wadi Kubbaniya), even though they may have a somewhat thicker medial supraorbital eminence. Despite its glabellar prominence and capacious maxillary sinuses, Hofmeyr exhibits only incipient frontal sinus development, a condition that is uncommon among European UP crania (27). The mandibular ramus has a well-developed gonial angle, and the slender coronoid process is equivalent in height to the condyle. The mandibular (sigmoid) notch is deep and symmetrical, and its crest intersects the lateral third of the condyle. The anterior margin of the ramus is damaged, but it is clear that there was no retromolar gap. The Hofmeyr molars are large. The buccolingual diameter of M2 exceeds recent African and Eurasian UP sample means by more than 2 SD (table S3). Radiographs reveal cynodont molars, although pulp chamber height is likely to have been affected by the deposition of secondary dentine in these heavily worn teeth. Thus, Hofmeyr is seemingly primitive in comparison to recent African crania in a number of features, including a prominent glabella; moderately thick, continuous supraorbital tori; a tall, flat, and straight malar; a broad frontal process of the maxilla; and comparatively large molar crowns.

One of the traits unique to modern Eurasians is a measurable increase in the cranial index.

Craniometric data have been collected from published and unpublished reports of numerous authors on 961 male and 439 female crania from various sites in Subsaharan Africa spanning the last 100 ka. All data available in the literature, irrespective of their "racial" affinities, were used to cover the prehistoric and early historic times (up to 400 a BP). Samples covering the last 400 years do not include European colonists and consist of skeletons excavated at archeological sites, collected by early European travelers and derived from anatomical collections. Cranial capacity, depending on the mode of its calculation, has decreased by 95–165 cm3 among males and by 74–106 cm3 among females between the Late Stone Age (30-2 ka BP) and modern times (last 200 years). Values of the cranial index did not show any trend over time and their averages remained in the dolichocephalic category. The decrease in cranial capacity in Subsaharan Africa is similar to that previously found in Europe, West Asia, and North Africa, but, unlike the latter, it is not accompanied by brachycephalization. © 1993 Wiley-Liss, Inc.
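
For readers unfamiliar with the index mentioned in that abstract, the cranial (cephalic) index is just a breadth-to-length ratio; the definition and cutoffs below follow standard anthropometric convention and are not taken from the quoted study itself.

```latex
% Cranial index: skull breadth as a percentage of skull length.
\[
\text{Cranial index} = \frac{\text{maximum cranial breadth}}{\text{maximum cranial length}} \times 100
\]
% By convention, values below roughly 75 are dolichocephalic (long-headed),
% about 75-80 mesocephalic, and above about 80 brachycephalic (broad-headed),
% which is what the abstract means by the averages staying dolichocephalic.
```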

It’s worth noting that even in Fuerle’s data, despite his emphasizing this trait in a single black example, Caucasians have larger browridges by comparison; blacks were described as small in this trait. Likewise, the data indicate that the skulls were generally smoother and rounder, with more receded cheekbones.

For a comprehensive look at how these features differ, this paper seems sufficient.

Population variation.

Morphological characteristics of the orbit that are most variable among the African, Asian, and European samples include orbital volume (obv), orbital depth (obd), basion-superior orbit (bso), and orbital breadth (obb), and are also those that contribute most to group separation in the multivariate analyses. Interorbital breadth (dkb), biorbital breadth (ekb), and basion-orbitale (bio) were not found to be statistically different among these samples; however, the low significance value for basion-orbitale in a one-way analysis of variance (p = 0.055) indicates that some degree of divergence exists among them. [Pairwise distance tables among the African, Asian, and European samples omitted.] Additionally, while a significance test was not carried out for “shape” of the orbital margins, it is clear that general differences exist among groups. The most notable difference is between the Asian and African samples, in which the former possesses high and narrow orbits (a more rounded shape), and the latter is characterized by lower and wider orbital margins (a more rectangular shape).

Hominin trends

This current investigation reveals that the orbital
margins vary in association with these long-term evolutionary changes, becoming
vertically shorter, horizontally elongated, more frontated, and retracted relative to basion, with a greater degree of reduction in the inferior orbital margins.

In other words, the rectangular orbit shape of “Negroids” is a retention, but toward a baseline Sapiens trend.

The wide rectangular shape of the orbital margins resulting from a shift in relative
size of orbital height and orbital breadth is highly characteristic of anatomically modern humans from the Upper Paleolithic in Europe and Asia (chapter 5), and extant groups from Sub-Saharan Africa (chapter 3). Following the Upper Paleolithic however, the trend toward superoinferiorly shorter and more elongated orbits associated with a grade shift in craniofacial form began to reverse, and the orbital margins become taller and narrower, taking on a more rounded shape. This more recent trend has also been documented among East Asian groups dating to the Holocene (Brown & Maeda, 2004; Wu et al. 2007), and is investigated as part of a larger examination of orbital change through the European Upper Paleolithic in chapter 5 of this thesis.

On the specifics for Eurasians:

In looking at size and shape of the orbital margins it can be seen that orbital breadth does not vary in relation to cranial shape, but does decrease as the upper facial index increases, with the same being true of biorbital breadth. In contrast, orbital height is positively correlated with both shape features, which one might expect particularly in relation to the upper facial index, in which a vertical increase in facial height and decrease in facial width would be assumed to affect in a similar way these same dimensions of the orbit. However, Brown & Maeda (2004) found that throughout the Neolithic in China, orbital height increases substantially even while facial height is reduced in that region.
In nearly every case, orbital variables are more highly correlated with shape of the
face than with shape of the head, which is understandable given their inclusion in the facial framework. However, the relationship between basion-orbitale and basion-superior orbit is negatively correlated with both cranial and facial shape variables and to approximately the same degree. This is of particular interest given that the upper facial index comprises two variables that indicate the relationship between height and width of the face in the coronal plane, though measures of basion-orbitale and basion-superior orbit lie in the parasagittal plane. Orbital depth also decreases in association with increased facial height and decreased facial breadth, but is not statistically related to change in cranial shape. This too is surprising given that orbital depth might be expected to decrease more as a result of anterior-posterior shortening of the skull rather than in relation to a narrowing and elongation of the face. Although the direction and magnitude of the relationship between orbital morphology and craniofacial shape largely mimics observed changes in orbital features during the last 30,000 years in Western Europe (section 5.4 above), orbital size deviates slightly from this pattern. Both orbital volume and the geometric mean of orbital height, breadth, and depth remained relatively unchanged since the Upper Paleolithic, however both show a statistically significant negative relationship to the upper facial index, meaning that as the face becomes taller and narrower, space within the orbits is diminished.
Brown and Maeda (2004) show that among skulls of Australian Aborigines and
Tohoku Japanese, which represent changing craniofacial form since the end of the
Pleistocene, orbital volume is highly correlated with supraorbital breadth, lower facial prognathism, and shape of the orbital margins. Among these crania a broader
supraorbital region, more projecting facial skeleton and lower orbital index (more
rectangular shape) are associated with a larger orbital volume. Change in these features, including a strong trend toward higher and narrower orbits, is considered to reflect a decrease in orbital volume that occurred throughout the Holocene in China (Brown & Maeda, 2004).
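The “orbital index” used in that passage works the same way as the cranial index: orbital height divided by orbital breadth, times 100, so a lower index means relatively wide, “rectangular” orbital margins and a higher index means taller, more “rounded” ones. A minimal sketch follows; the measurements are hypothetical, chosen only to illustrate the two shapes being contrasted, and are not values from the thesis or from Brown & Maeda.

```python
# Minimal sketch of the orbital index discussed above.
# A lower index = relatively wide, "rectangular" orbital margins;
# a higher index = relatively tall, "rounded" margins.
# Example values are hypothetical, not measurements from any study.

def orbital_index(height_mm: float, breadth_mm: float) -> float:
    """Orbital index = (orbital height / orbital breadth) * 100."""
    return height_mm / breadth_mm * 100.0

if __name__ == "__main__":
    samples = {
        "wide, rectangular orbit (hypothetical)": (32.0, 42.0),
        "tall, rounded orbit (hypothetical)": (37.0, 39.0),
    }
    for label, (height, breadth) in samples.items():
        print(f"{label}: index = {orbital_index(height, breadth):.1f}")
```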

Africans’ prognathism and interorbital breadth can be accounted for here. Page 13 explains an association between interorbital breadth and prognathism. Within South Africans, however, wide interorbital breadth compensates for a low prognathic profile (pages 229-230). In Africans, compared to African Americans, the trait is more variable. Page 216 notes that robust craniofacial features do not correlate with browridge size; uncorrelated features can be explained by geography, for instance.

Fossils-

Richard Fuerle noted the particularly archaic nature of the 100-300 ka Kabwe/Broken Hill skull in contrast to modern humans in Ethiopia. Taking this together with modern “retentions,” he asserted that African peculiarities were long-standing and postulated that the Middle East was the actual home of human origins.

One problem with this logic is that similar findings exist in Europe and Asia. Despite being contemporary with Neanderthals by context, the morphology of the Ceprano skull is closer to the last common ancestor (LCA) with Sapiens.

By contrast, Rhodesiensis existed alongside others that show more marked Sapiens differentiation, like the South African Florisbad specimen mentioned here.

Others may mention the Iwo Eleru finding. That isn’t unique to Africa either, as the Red Deer Cave people show. On their origins:

Our analysis suggests two plausible explanations for the morphology sampled at Longlin Cave and Maludong. First, it may represent a late-surviving archaic population, perhaps paralleling the situation seen in North Africa as indicated by remains from Dar-es-Soltane and Temara, and maybe also in southern China at Zhirendong. Alternatively, East Asia may have been colonised during multiple waves during the Pleistocene, with the Longlin-Maludong morphology possibly reflecting deep population substructure in Africa prior to modern humans dispersing into Eurasia.

More specifically.

The number of Late Pleistocene hominin species and the timing of their extinction are issues receiving renewed attention following genomic evidence for interbreeding between the ancestors of some living humans and archaic taxa. Yet, major gaps in the fossil record and uncertainties surrounding the age of key fossils have meant that these questions remain poorly understood. Here we describe and compare a highly unusual femur from Late Pleistocene sediments at Maludong (Yunnan), Southwest China, recovered along with cranial remains that exhibit a mixture of anatomically modern human and archaic traits. Our studies show that the Maludong femur has affinities to archaic hominins, especially Lower Pleistocene femora. However, the scarcity of later Middle and Late Pleistocene archaic remains in East Asia makes an assessment of systematically relevant character states difficult, warranting caution in assigning the specimen to a species at this time. The Maludong fossil probably samples an archaic population that survived until around 14,000 years ago in the biogeographically complex region of Southwest China.

Subsequent studies on dentition confirm this view, along with multiple others.
Our results indicate that the Hexian teeth are metrically and morphologically primitive and overlap with H. ergaster and East Asian Early and mid-Middle Pleistocene hominins in their large dimensions and occlusal complexities. However, the Hexian teeth differ from H. ergaster in features such as conspicuous vertical grooves on the labial/buccal surfaces of the central incisor and the upper premolar, the crown outline shapes of upper and lower molars and the numbers, shapes, and divergences of the roots. Despite their close geological ages, the Hexian teeth are also more primitive than Zhoukoudian specimens, and resemble Sangiran Early Pleistocene teeth. In addition, no typical Neanderthal features have been identified in the Hexian sample. Our study highlights the metrical and morphological primitive status of the Hexian sample in comparison to contemporaneous or even earlier populations of Asia. Based on this finding, we suggest that the primitive-derived gradients of the Asian hominins cannot be satisfactorily fitted along a chronological sequence, suggesting complex evolutionary scenarios with the coexistence and/or survival of different lineages in Eurasia. Hexian could represent the persistence in time of a H. erectus group that would have retained primitive features that were lost in other Asian populations such as Zhoukoudian or Panxian Dadong. Our study expands the metrical and morphological variations known for the East Asian hominins before the mid-Middle Pleistocene and warns about the possibility that the Asian hominin variability may have been taxonomically oversimplified.
Along with this replication,
Mandibular and dental features indicate that the Hexian mandible and teeth differ from northern Chinese H. erectus and European Middle Pleistocene hominins, but show some affinities with the Early Pleistocene specimens from Africa (Homo ergaster) and Java (H. erectus), as well as the Middle-Late Pleistocene mandible from Penghu, Taiwan. Compared to contemporaneous continental Asian hominin populations, the Hexian fossils may represent the survival of a primitive hominin, with more primitive morphologies than other contemporaneous or some chronologically older Asian hominin specimens.
Finally, just to make the point.
 Our dental study reveals a mosaic of primitive and derived dental features for the Xujiayao hominins that can be summarized as follows: i) they are different from archaic and recent modern humans, ii) they present some features that are common but not exclusive to the Neanderthal lineage, and iii) they retain some primitive conformations classically found in East Asian Early and Middle Pleistocene hominins despite their young geological age.
The age of this specimen has been updated to an upper limit of ~370 ka.
Middle to Late Pleistocene human evolution in East Asia has remained controversial regarding the extent of morphological continuity through archaic humans and to modern humans. Newly found ∼300,000-y-old human remains from Hualongdong (HLD), China, including a largely complete skull (HLD 6), share East Asian Middle Pleistocene (MPl) human traits of a low vault with a frontal keel (but no parietal sagittal keel or angular torus), a low and wide nasal aperture, a pronounced supraorbital torus (especially medially), a nonlevel nasal floor, and small or absent third molars. It lacks a malar incisure but has a large superior medial pterygoid tubercle. HLD 6 also exhibits a relatively flat superior face, a more vertical mandibular symphysis, a pronounced mental trigone, and simple occlusal morphology, foreshadowing modern human morphology. The HLD human fossils thus variably resemble other later MPl East Asian remains, but add to the overall variation in the sample. Their configurations, with those of other Middle and early Late Pleistocene East Asian remains, support archaic human regional continuity and provide a background to the subsequent archaic-to-modern human transition in the region.
This author helped fit a sequence for these specimens within East Asian and European archaic variation; mind you, they do not overlap significantly with African Sapiens such as Jebel Irhoud on the PCA, so there is no validation of Fuerle here.
The HLD human sample, primarily the HLD 6 skull but including the isolated cranial, dental, and femoral remains, provides a suite of morphological features that place it comfortably within the previously known Middle to early Late Pleistocene East Asian human variation and trends. These Middle-to-Late Pleistocene archaic human remains from East Asia can be grouped into four chronological groups, from the earlier Lantian-Chenjiawo, Yunxian, and Zhoukoudian; to Hexian and Nanjing; then Chaoxian, Dali, HLD, Jinniushan, and Panxian Dadong; and ending with Changyang, Xuchang, and Xujiayao. They are followed in the early Late Pleistocene by Huanglong, Luna, Fuyan, and Zhiren, which together combine archaic and modern features.
Altogether, what does this show? This study conveniently addresses that.
There is nonetheless substantial variation across the available East Asian sample within and across these chronological groups and especially in terms of individual traits and their combinations within specimens (SI Appendix, Figs. S16 and S17 and Tables S10, S12, and S13). However, similar variation within regions and within site samples is evident elsewhere during the MPl (as reflected in the persistent absence of taxonomic consensus regarding MPl humans; see refs. 19, 23, 41, and 42), and it need not imply more than normal variation among these fluctuating forager populations.

The growing human fossil sample from mainland East Asia, enhanced by the HLD remains, therefore provides evidence of continuity through later archaic humans, albeit with some degree of variation within chronological groups. As such, the sample follows the same pattern as the accumulating fossil evidence for MPl (variably into the Late Pleistocene) morphological continuity within regional archaic human groups in Europe (e.g., ref. 43), Northwest Africa (e.g., ref. 44), and insular Southeast Asia (e.g., refs. 21 and 24), as well as into early modern humans in East Africa (e.g., ref. 45). Several divergent peripheral samples [Denisova, Dinaledi, and Liang Bua (46-48)] do not follow this pattern, but they are best seen as interesting human evolutionary experiments (49) and not representative of Middle to Late Pleistocene human evolution. It is the core continental regions that provide the overall pattern of human evolution during this time period and form the background for the emergence of modern humans.

Although there is considerable interregional diversity across these Old World subcontinental samples, primarily in details of craniofacial morphology, these fossil samples exhibit similar trends in primary biological aspects (e.g., encephalization, craniofacial gracilization). Moreover, all of these regional groups of Middle to Late Pleistocene human remains reinforce that the dominant pattern through archaic humans [and variably into early modern humans through continuity or admixture (16, 50, 51)] was one of regional population consistency combined with global chronological trends.
Simply put, the Eurasian and African data complement each other in having “oddballs” that are less significant in the greater context of the large Stone Age diversity in morphology.
The next critiques won’t be as broad, given that they are covered in my previous work.
Pygmies and Khoi-san-
He figured Pygmies were admixed with Australopithecus to explain their stature, when it was actually due to convergent recent selection within the last 20 ka.
On the Khoi-San, he took a partial Carleton Coon approach and claimed that at least part of their ancestry comes from “Mongoloids” to account for their eyelids, skin tone, and head shape. Hopefully those reading have already read my review of the actual science on this matter. If not, see the link above.
Genetics-
Woodley conveniently tested it out and turned up some “surprising” errors.
Fuerle has recently attempted to build a case for the existence
of multiple biological species of humans from a molecular perspective.
Fuerle used comparative genetic distance data involving various
DNA types obtained from a variety of sources for a range of
biological species and subspecies [54]. The results of his review
are summarized in the following table. Additional data involving
non-mtDNA based estimates of the genetic distance between the
gorilla species and the chimpanzees and bonobos have been included
for comparison.
Table 4 would seem to suggest that the Sub-Saharan African
(Bantu) and Australopapuan (Aborigine) genetic difference as measured
by SNP’s is greater than the genetic distance between both
the two species of gorilla (Gorilla gorilla and Gorilla beringei), and
greater than the distance between the common chimpanzee and
the bonobo as measured by mtDNA.
On the basis of this Fuerle suggests that there are only two
consistent courses of action to take regarding re-classification –
splitting or lumping. Either H. sapiens could be split into two species
– Homo africanus which would encompass modern African
populations and Homo eurasianensis which would encompass Eurasian
populations; making the genus Homo consistent in his view,
species-wise with respect to other genera in which the differences
between species are expressed in terms of much smaller genetic
distances; or alternatively the genetic variability within the human
species could be used to typologically define the absolute limits of
what constitutes a vertebrate species, which could then be employed
as a taxonomic baseline in the classification of other species.
This would mean lumping the two gorilla species and the
chimpanzee and the bonobo as single species.
Further on,
FST reflects the relative amount of total genetic differentiation
between populations, however different measures of genetic distance
involving mtDNA and autosomal loci are simply inappropriate for the purposes of inter-specific comparison as the different
genes involved will have been subject to markedly different selection
pressures and are therefore not likely to have diverged at the
same time [62]. To illustrate this point, this author listed alternative
estimates of the distance between the gorilla species and the
common chimpanzee and bonobo, based on various nuclear loci
and autosomal DNA. The much higher numbers reflect the extreme
variation that can be expected when different genes are considered.
Fuerle’s presentation of the data is also problematic for another
reason, namely he makes no mention of the current
debates surrounding gorilla and chimpanzee/bonobo taxonomy;
as new research on these taxa regularly generates novel and in
some cases wildly variable estimates of genetic distance between
these primates, and there is even some debate over whether the
eastern and western gorillas are separate species [60].
Curnoe and Thorne have estimated that periods of around two
million years were required for the production of sufficient genetic
distances to represent speciation within the human ancestral lineage
[56]. This indicates that the genetic distances between the
races are too small to warrant differentiation at the level of biological
species, as the evolution of racial variation within H. sapiens
started to occur only 60,000 years ago, when the ancestors of modern
humans first left Africa.
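For context on the FST point in the passage above: under Wright’s classic formulation, FST = (HT − HS) / HT, the share of total expected heterozygosity that lies between rather than within populations. Below is a minimal sketch of that calculation for a single biallelic locus; the function names and allele frequencies are mine, invented purely to illustrate the formula, and are not estimates for any real populations.

```python
# Minimal sketch of Wright's FST for a single biallelic locus.
# FST = (H_T - H_S) / H_T, where H_T is the expected heterozygosity of the
# pooled (total) population and H_S is the mean expected heterozygosity
# within subpopulations. Allele frequencies below are invented for
# illustration only; equal subpopulation sizes are assumed.

def heterozygosity(p: float) -> float:
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2.0 * p * (1.0 - p)

def fst(subpop_freqs: list[float]) -> float:
    """Wright's FST from per-subpopulation allele frequencies."""
    h_s = sum(heterozygosity(p) for p in subpop_freqs) / len(subpop_freqs)
    p_bar = sum(subpop_freqs) / len(subpop_freqs)
    h_t = heterozygosity(p_bar)
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

if __name__ == "__main__":
    # Two hypothetical populations with modestly different allele frequencies.
    print(f"FST = {fst([0.30, 0.45]):.3f}")
```

The quoted point is that distances computed from different markers (mtDNA, SNPs, various nuclear loci) are not on a common scale, so lining up Fuerle’s numbers across studies and marker types is not a like-for-like comparison.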
Summary- The current morphological data, the prehistoric morphological data, and population genetics leave the basis of Fuerle’s model of race differences in shambles. Even when there was a debate, it fit nowhere from the beginning.
Fuerle’s shallow fringe appeal within actual HBD circles quickly shows either naivety or deliberate bias, which isn’t shocking given what little background the author had despite the notably large citation list behind his data.
This review isn’t without its flaws. Driven largely by my own repugnance toward the book, and despite my efforts at citation, I didn’t make use of direct quotes from it. This affects my negative argument against Fuerle, potentially making straw men of the arguments I addressed.
I feel, however, that I’ve done an adequate job of building my positive case from the better arguments within the frameworks of various researchers.

How Things Change: Perspectives on Intelligence in Antiquity

1300 words

The cold winter theory (CWT) is a theory that purports to explain why those whose ancestors evolved in colder climes are more “intelligent” than those whose ancestors evolved in warmer climes. Popularized by Rushton (1997), Lynn (2006), and Kanazawa (2012), the theory supposedly accounts for the “haves” and the “have nots” in regard to intelligence. However, the theory is a just-so story; that is, it explains what it purports to explain without generating previously unknown facts not used in the construction of the theory. PumpkinPerson is irritated by people who do not believe the just-so story of the CWT, writing (while citing the same old “challenges” as Lynn, which were dispatched by McGreal):

The cold winter theory is extremely important to HBD.  In fact I don’t even understand how one can believe in racially genetic differences in IQ without also believing that cold winters select for higher intelligence because of the survival challenges of keeping warm, building shelter, and hunting large game.

The CWT is “extremely important to HBD“, as PP claims, since there needs to be an evolutionary basis for population differences in “intelligence” (IQ). Without the just-so story, the claim that racial differences in “intelligence” are “genetically” based crumbles.

Well, here is the biggest “challenge” (all other refutations aside) to the CWT: notions of which populations are or are not “intelligent” change with the times. The best example is what the Greeks, specifically Aristotle, wrote about the intelligence of those who lived in the north. Maurizio Meloni, in his 2019 book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics, captures this point (pg 41-42; emphasis his):

Aristotle’s Politics is a compendium of all these ideas [Orientals being seen as “softer, more delicate and unwarlike” along with the structure of militaries], with people living in temperate (mediocriter) places presented as the most capable of producing the best political systems:

“The nations inhabiting the cold places and those of Europe are full of spirit but somewhat deficient in intelligence and skill, so that they continue comparatively free, but lacking in political organization and the capacity to rule their neighbors. The peoples of Asia on the other hand are intelligent and skillful in temperament, but lack spirit, so that they are in continuous subjection and slavery. But the Greek race participates in both characters, just as it occupies the middle position geographically, for it is both spirited and intelligent; hence it continues to be free and to have very good political institutions, and to be capable of ruling all mankind if it attains constitutional unity.” (Pol. 1327b23-33, my italics)

Views of direct environmental influence and the porosity of bodies to these effects also entered the military machines of ancient empires, like that of the Romans. Officers such as Vegetius (De re militari, I/2) suggested avoiding recruiting troops from cold climates as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016). Delicate and effeminizing land was also to be abandoned as soon as possible, according to Manilius and Caesar (ibid). Probably the most famous geopolitical dictum of antiquity reflects exactly this plastic power of places: “soft lands breed soft men”, according to the claim that Herodotus attributed to Cyrus.

Isn’t that weird, how things change? Quite obviously, which population is or is not “intelligent” depends on the time and place of the observation. Those in northern Europe, who today are purported to be more intelligent than those who live in temperate or hotter climes, were seen in antiquity as less intelligent than those who lived in more temperate climes. Imagine stating in the present day what Aristotle said thousands of years ago: those who push the CWT just-so story would look at you like you’re crazy because, supposedly, those who lived in and evolved in colder climes had to plan ahead and faced a tougher environment than those who lived closer to the equator.

Imagine we could transport Aristotle to the present day. What would he say about our perspectives on which populations are or are not intelligent? Surely he would think it ridiculous that the Greeks today are held to be less “intelligent” than those from northern Europe. But that only speaks to how things change, and how people’s perspectives change with the times and with who is or is not a dominant group. Now imagine that we could transport someone (preferably an “IQ” researcher) to antiquity, when the Greeks were at the height of their power. They would then create a just-so story to justify their observations about the intelligence of populations based on their evolutionary history.

Anatoly Karlin cites Galton, who claims that ancient Greek IQ was 125, while Karlin himself claims IQ 90. I cite Karlin’s article not to contest his “IQ estimates”—nor Galton’s—I cite it to show the disparate “estimates” of the intelligence of the ancient Greeks. Because, according to the Greeks, they occupied the middle position geographically, and so they were both spirited and intelligent compared to Asians and northern Europeans.

This is similar to Wicherts, Borsboom, and Dolan (2010), who responded to Rushton, Lynn, and Templer. They state that the socio-cultural achievements of Mesopotamia and Egypt stand in “stark contrast to the current low level of national IQ of peoples of Iraq and Egypt” and that these ancient achievements appear to contradict evolutionary accounts of differences in national IQ. One can make a similar observation about the Maya. Their cultural achievements stand in stark contrast to their “evolutionary history” in warm climes. The Maya were geographically isolated from other populations and still created a writing system (independently), along with other cultural achievements showing that “national IQs” are irrelevant to what a population achieved. I’m sure an IQ-ist can create a just-so story to explain this away, but that’s not the point.

Going back to what Karlin and Galton stated about Greek IQ: their IQ is irrelevant to their achievements. Whether their IQ was 120-125 or 90 is irrelevant to what they achieved. As for the Mesopotamians and Egyptians, they saw themselves as more intelligent than those from northern climes; they would, obviously, think that based on their own achievements and the lack of achievements in the north. The achievements of peoples in antiquity would paint a whole different picture in regard to an evolutionary theory of human intelligence and its distribution in human populations.

So which just-so story (ad hoc hypothesis) should we accept? Or should we just accept that which populations are or are not “intelligent” and capable of fielding militaries is contingent on the time and place of the observation? Looking at the “national IQs” of peoples in antiquity would show a huge difference from what we observe today about the “national IQs” (supposedly ‘intelligence’) of populations around the world. In antiquity, those who lived in temperate and even hotter climes had greater achievements than others, and Greeks and Romans argued that peoples from northern climes should not be enlisted in the military due to where they were from.

These observations from the Greeks and Romans about whom to enlist in the military and whom not to, along with their thoughts on northern Europeans, prove that perspectives on which populations are or are not “intelligent” are contingent on time and place. This is why “national IQs” should not be accepted, even without accounting for the problems with the data (Richardson, 2004; also see Morse, 2008; also see The Ethics of Development: An Introduction by Ingram and Derdak, 2018). Seeing the development of countries and populations in antiquity would lead to a whole different evolutionary theory of the intelligence of populations, proving the contingency of these observations.
