Genetic reductionism refers to the belief that understanding our genes will allow us to understand everything from human behavior to disease. The behavioral genetic approach claims to be the best way to parse social and biological causes of health, disease, and behavior. The aim of genetic reductionism is to reduce a complex biological system to the sum of its parts. While there was some value in doing so when our technology was in its infancy and we did learn a lot about what makes us “us”, the reductionist paradigm has outlived its usefulness.
If we want to understand a complex biological system then we shouldn’t use gene scores, heritability estimates, or gene sequencing. We should be attempting to understand how the whole biological system interacts with its surroundings—its environment.
Reductionists may claim that “gene knockout” studies can point us in the direction of genetic causation—“knockout” a gene and, if there are any changes, then we can say that that gene caused that trait. But is it so simple? Richardson (2000) puts it well:
All we know for sure is that rare changes, or mutations, in certain single genes can drastically disrupt intelligence, by virtue of the fact that they disrupt the whole system.
Noble (2011) writes:
Differences in DNA do not necessarily, or even usually, result in differences in phenotype. The great majority, 80%, of knockouts in yeast, for example, are normally ‘silent’ (Hillenmeyer et al. 2008). While there must be underlying effects in the protein networks, these are clearly buffered at the higher levels. The phenotypic effects therefore appear only when the organism is metabolically stressed, and even then they do not reveal the precise quantitative contributions for reasons I have explained elsewhere (Noble, 2011). The failure of knockouts to systematically and reliably reveal gene functions is one of the great (and expensive) disappointments of recent biology. Note, however, that the disappointment exists only in the gene-centred view. By contrast it is an exciting challenge from the systems perspective. This very effective ‘buffering’ of genetic change is itself an important systems property of cells and organisms.
Moreover, even when a difference in the phenotype does become manifest, it may not reveal the function(s) of the gene. In fact, it cannot do so, since all the functions shared between the original and the mutated gene are necessarily hidden from view. … Only a full physiological analysis of the roles of the protein it codes for in higher-level functions can reveal that. That will include identifying the real biological regulators as systems properties. Knockout experiments by themselves do not identify regulators (Davies, 2009).
All that knocking out or changing genes/alleles will do is show us that T is correlated with G, not that T is caused by G. Merely observing a correlation after changing or knocking out genes will tell us nothing about biological causation. Reductionism will not help us understand the etiology of disease, since the discipline of physiology is not reductionist at all—it is a holistic discipline.
Lewontin (2000: 12) writes in the introduction to The Ontogeny of Information: “But if I successfully perform knockout experiments on every gene that can be seen in such experiments to have an effect on, say, wing shape, have I even learned what causes the wing shape of one species or individual to differ from that of another? After all, two species of Drosophila presumably have the same relevant set of loci.”
But the loss of a gene can be compensated by another gene—a phenomenon known as genetic compensation. In a complex bio-system, when one gene is knocked out, another similar gene may take the ‘role’ of the knocked-out gene. Noble (2006: 106-107) explains:
Suppose there are three biochemical pathways A, B, and C, by which a particular necessary molecule, such as a hormone, can be made in the body. And suppose the genes for A fail. What happens? The failure of the A genes will stimulate feedback. This feedback will affect what happens with the sets of genes for B and C. These alternate genes will be more extensively used. In the jargon, we have here a case of feedback regulation; the feedback up-regulates the expression levels of the two unaffected genes to compensate for the genes that got knocked out.
Clearly, in this case, we can compensate for two such failures and still be functional. Only if all three mechanisms fail does the system as a whole fail. The more parallel compensatory mechanisms an organism has, the more robust (fail-safe) will be its functionality.
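Noble’s three-pathway example can be put into a toy feedback model. This is purely an illustrative sketch: the set point, baseline rate, and up-regulation ceiling are invented numbers, not anything from Noble’s text.

```python
# Toy model of feedback up-regulation: three parallel pathways (A, B, C)
# each produce the same hormone. Feedback compares total output to a set
# point and up-regulates whichever pathways still work.

def hormone_output(active_pathways, set_point=90.0):
    """Return total hormone output after feedback compensation.

    Each working pathway starts at a baseline expression level; feedback
    scales the working pathways up until the set point is met, or until
    the pathways saturate.
    """
    baseline = 30.0          # output per pathway before feedback
    max_upregulation = 3.0   # each pathway can at most triple its output
    n = len(active_pathways)
    if n == 0:
        return 0.0           # all three mechanisms failed: system fails
    # Feedback raises expression of the remaining pathways to cover the gap.
    needed_scale = set_point / (n * baseline)
    scale = min(needed_scale, max_upregulation)
    return n * baseline * scale

print(hormone_output({"A", "B", "C"}))  # 90.0 — set point met, no stress
print(hormone_output({"B", "C"}))       # 90.0 — A knocked out, B/C up-regulated
print(hormone_output({"C"}))            # 90.0 — C alone, tripled to compensate
print(hormone_output(set()))            # 0.0  — only now does the system fail
```

Note what the sketch reproduces: knocking out the genes for pathway A leaves the phenotype (total hormone output) unchanged, which is exactly why a knockout can be phenotypically ‘silent’ without the gene being functionless.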
The Neo-Darwinian Synthesis has trouble explaining such compensatory genetic mechanisms—but the systems view (Developmental Systems Theory, DST) does not. Even if a knockout affects the phenotype, we cannot say that the gene outright caused the phenotype; the system was perturbed, and it responded in that way.
Genetic networks and their role in development became clear when geneticists began using knockout techniques to disable genes known to be implicated in the development of characters, only to find the phenotype unchanged—this, again, is an example of genetic compensation. Jablonka and Lamb (2005: 67) describe three reasons why the genome can compensate for the absence of a particular gene:
first, many genes have duplicate copies, so when both alleles of one copy are knocked out, the reserve copy compensates; second, genes that normally have other functions can take the place of a gene that has been knocked out; and third, the dynamic regulatory structure of the network is such that knocking out single components is not felt.
Using Waddington’s epigenetic landscape example, Jablonka and Lamb (2005: 68) go on to say that if you knocked a peg out, “processes that adjust the tension on the guy ropes from other pegs could leave the landscape essentially unchanged, and the character quite normal. … If knocking out a gene completely has no detectable effect, there is no reason why changing a nucleotide here and there should necessarily make a difference. The evolved network of interactions that underlies the development and maintenance of every character is able to accommodate or compensate for many genetic variations.”
“multiple alternative pathways . . . are the rule rather than the exception . . . such pathways can continue to function despite amino acid changes that may impair one intermediate regulator. Our results underscore the importance of systems biology approaches to understand functional and evolutionary constraints on genes and proteins.” (Quoted in Richardson, 2017: 132)
When it comes to disease, genes are said to be difference-makers—that is, the one gene difference/mutation is what is causing the disease phenotype. Genes, of course, interact with our lifestyles and they are implicated in the development of disease—as necessary, not sufficient, causes. GWA studies (genome-wide association studies) have been all the rage for the past ten or so years. To find alleles ‘associated’ with disease, GWA practitioners take healthy people and diseased people, sequence their genomes, and then look for alleles that are more common in one group than the other. Alleles more common in the disease group are said to be ‘associated’ with the disease, while alleles more common in the control group can be said to be protective against the disease (Kampourakis, 2017: 102). (This same process is how ‘intelligence‘ is GWASed.)
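The case-control comparison described above boils down to a 2×2 table of allele counts. Here is a minimal sketch with made-up counts; a real GWA study tests millions of SNPs and must correct for multiple testing and population stratification, none of which is shown here.

```python
def allele_association(case_alt, case_ref, control_alt, control_ref):
    """Chi-square statistic (1 df, no continuity correction) on a 2x2
    table of allele counts, plus the odds ratio: OR > 1 means the alt
    allele is more common in cases ('associated' with disease), OR < 1
    means it is more common in controls ('protective')."""
    table = [[case_alt, case_ref], [control_alt, control_ref]]
    total = case_alt + case_ref + control_alt + control_ref
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = sum(table[i]) * (table[0][j] + table[1][j]) / total
            chi2 += (obs - expected) ** 2 / expected
    odds_ratio = (case_alt * control_ref) / (case_ref * control_alt)
    return chi2, odds_ratio

# Hypothetical SNP: alt allele seen on 300 of 1000 case chromosomes
# vs 200 of 1000 control chromosomes.
chi2, oratio = allele_association(300, 700, 200, 800)
print(round(chi2, 2), round(oratio, 2))  # 26.67 1.71
```

As the surrounding text stresses, a significant statistic here is a frequency difference between groups, i.e. a correlation between allele and diagnosis; nothing in the arithmetic licenses a causal claim.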
“Disease is a character difference” (Kampourakis, 2017: 132). So if disease is a character difference and differences in genes cannot explain the existence of different characters but can explain the variation in characters then the same must hold for disease.
“Gene for” talk is about the attribution of characters and diseases to DNA, even though it is not DNA that is directly responsible for them. … Therefore, if many genes produce or affect the production of the protein that in turn affects a character or disease, it makes no sense to identify one gene as the gene responsible “for” this character or disease. Single genes do not produce characters or disease … (Kampourakis, 2017: 134-135)
This all stems from the “blueprint metaphor”—the belief that the genome contains a blueprint for form and development. There are, however, no ‘genes for’ character or disease, therefore, genetic determinism is false.
Genes, in fact, are weakly associated with disease. A new study (Patron et al, 2019) analyzed 569 GWA studies, looking at 219 different diseases. David Wishart (one of the co-authors) was interviewed by Reuters, where he said:
“Despite these rare exceptions [genes accounting for half of the risk of acquiring Crohn’s, celiac and macular degeneration], it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” Wishart said.
“Based on our results, more than 95% of diseases or disease risks (including Alzheimer’s disease, autism, asthma, juvenile diabetes, psoriasis, etc.) could NOT be predicted accurately from SNPs.”
It seems like this is, yet again, another failure of the reductionist paradigm. We need to understand how genes interact in the whole human biological system, not reduce our system to the sum of its parts (‘genes’). Research programs like these are premised on reductionist assumptions; it seems intuitive to think that many diseases are ‘caused by’ genes, as if genes are ‘in control’ of development. However, what is truly ‘in control’ of development is the physiological system—where genes are used only as resources, not causes. The reductionist (neo-Darwinist) paradigm cannot really explain genetic compensation after knocking out genes, but the systems view can. The amazing complexity of bio-systems allows them to buffer against developmental miscues and missing genes in order to complete the development of the organism.
Genes are not active causes, they are passive causes, resources—they, therefore, cannot cause disease and characters.
HBDers like to talk about this perception that their ideas are not really discussed in the public discourse; that the truth is somehow withheld from the public due to a nefarious plot to shield people from the truth that they so heroically attempt to get out to the dumb masses. They like to claim that the field and its practitioners are ‘silenced’, that they are rejected outright for ‘wrongthink’ ideas they hold. But if we look at what kinds of studies get out to the public, a different picture emerges.
The title of Cofnas’ (2019) paper is Research on group differences in intelligence: A defense of free inquiry; the title of Carl’s (2018) paper is How Stifling Debate Around Race, Genes and IQ Can Do Harm; and the title of Meisenberg’s (2019) paper is Should Cognitive Differences Research Be Forbidden? Meisenberg’s paper is the most direct response to my most recent article, an argument to ban IQ tests due to the class/racial bias they hold, which may then be used to enact undesirable consequences on groups that score low—but like all IQ-ists, these authors assume that IQ tests are tests of intelligence, which is a dubious assumption. In any case, all three seem to think there is a silencing of their work.
For Darwin200 (his 200th birthday) back in 2009, the question “Should scientists study race and IQ” was asked in the journal Nature. Neuroscientist Steven Rose (2009: 788) stated “No”, writing:
The problem is not that knowledge of such group intelligence differences is too dangerous, but rather that there is no valid knowledge to be found in this area at all. It’s just ideology masquerading as science.
Ceci and Williams (2009: 789) answered “Yes” to the question, writing:
When scientists are silenced by colleagues, administrators, editors and funders who think that simply asking certain questions is inappropriate, the process begins to resemble religion rather than science. Under such a regime, we risk losing a generation of desperately needed research.
John Horgan wrote in Scientific American:
But another part of me wonders whether research on race and intelligence—given the persistence of racism in the U.S. and elsewhere–should simply be banned. I don’t say this lightly. For the most part, I am a hard-core defender of freedom of speech and science. But research on race and intelligence—no matter what its conclusions are—seems to me to have no redeeming value.
And when he says that “research on race and intelligence … should simply be banned“, he means:
Institutional review boards (IRBs), which must approve research involving human subjects carried out by universities and other organizations, should reject proposed research that will promote racial theories of intelligence, because the harm of such research–which fosters racism even if not motivated by racism–far outweighs any alleged benefits. Employing IRBs would be fitting, since they were formed in part as a response to one of the most notorious examples of racist research in history, the Tuskegee Syphilis Study, which was carried out by the U.S. Public Health Service from 1932 to 1972.
At the end of the 2000s, journalist William Saletan was big in the ‘HBD-sphere’ due to his writings on race and sport and on race and IQ. But in 2018, after the Harris/Murray fiasco on Harris’ podcast, Saletan wrote:
Many progressives, on the other hand, regard the whole topic of IQ and genetics as sinister. That too is a mistake. There’s a lot of hard science here. It can’t be wished away, and it can be put to good use. The challenge is to excavate that science from the muck of speculation about racial hierarchies.
What’s the path forward? It starts with letting go of race talk. No more podcasts hyping gratuitous racial comparisons as “forbidden knowledge.” No more essays speaking of grim ethnic truths for which, supposedly, we must prepare. Don’t imagine that if you posit an association between race and some trait, you can add enough caveats to erase the impression that people can be judged by their color. The association, not the caveats, is what people will remember.
If you’re interested in race and IQ, you might bristle at these admonitions. Perhaps you think you’re just telling the truth about test scores, IQ heritability, and the biological reality of race. It’s not your fault, you might argue, that you’re smeared and misunderstood. Harris says all of these things in his debate with Klein. And I cringe as I hear them, because I know these lines. I’ve played this role. Harris warns Klein that even if we “make certain facts taboo” and refuse “to ever look at population differences, we will be continually ambushed by these data.” He concludes: “Scientific data can’t be racist.”
Of course “scientific data can’t be racist”, but the data can be used by racists for racist motives, and the tool used to collect the data could be inherently biased against certain groups, meaning it favors other groups.
Saletan claims that IQ tests can be ‘put to good use’, but it is “illogical” to hold that the use of IQ tests is negative in some instances and positive in others; it is either one or the other—you cannot hold that IQ testing is good here and bad there.
Callier and Bonham (2015) write:
These types of assessments cannot be performed in a vacuum. There is a broader social context with which all investigators must engage to create meaningful and translatable research findings, including intelligence researchers. An important first step would be for the members of the genetics and behavioral genetics communities to formally and directly confront these challenges through their professional societies and the editorial boards of journals.
If traditional biases triumph over scientific rigor, the research will only exacerbate existing educational and social disparities.
Tabery (2015) states that:
it is important to remember that even if the community could keep race research at bay and out of the newspaper headlines, research on the genetics of intelligence would still not be expunged of all controversy.
IQ “science” is a subfield of behavioral genetics, so the overarching controversy is about behavioral genetics itself (see Panofsky, 2014). Given how Cofnas (2019), Carl (2018), and Meisenberg (2019) talk about race and IQ, you would expect hardly any IQ research to be reported in mainstream outlets. But that’s not what we find. When we look at what is published regarding behavioral genetic studies compared to regular genetic studies, a stark contrast emerges.
Society at large already harbors genetic determinist attitudes and beliefs, and what the mainstream newspapers put out then solidifies the false beliefs of the populace. Even then, a populace more educated about genes and trait ontogeny will not necessarily be supportive of new genetics research and discoveries; they may even be critical of such studies (Etchegary et al, 2012). Schweitzer and Saks (2007) showed that the popular show CSI pushes false concepts of genetic testing on the public, portraying DNA testing as quick, reliable, and decisive in prosecuting cases; about 40 percent of the ‘science’ used on CSI does not exist, and this, too, promulgates false beliefs about genetics in society. Lewis et al (2000) asked schoolchildren “Why are genes important?”; 73 percent responded that genes are important because they determine characters, 14 percent responded that they are important because they transfer information, and none spoke of gene products.
In the book Genes, Determinism and God, Denis Alexander (2017: 18) states that:
Much data suggest that the stories promulgated by the kind of ‘elite media’ stories cited previously do not act as ‘magic bullets’ to be instantly absorbed by the reader, but rather are resisted, critiqued or accepted depending on the reader’s economic interests, health and social status and access to competing discourses. A recurring theme is that people display a ‘two-track model’ in which they can readily switch between more genetic deterministic explanations for disease or different behaviors and those which favour environmental factors or human choice (Condit et al., 2009).
The so-called two-track model is simple: one holds genetic determinist beliefs about a certain trait, like heart disease or diabetes, but then contradicts oneself and states that diet and exercise can ameliorate any future complications (Condit, 2010). Though, holding “behavioral causal beliefs” (that one’s behavior is causal in regard to disease acquisition) is associated with behavioral change (Nguyen et al, 2015). This seems to be an example of what Bo Winegard means when he uses the term “selective blank slatism” or “ideologically motivated blank slatism.” But if one’s ideology motivates them to believe that genes are causal regarding health, intelligence, and disease, or to reject that claim, then that ideological bent must be genetically mediated too. So how can we ever have objective science if people are biased by their genetics?
Condit (2011: 625) compiled a chart showing people’s attitudes toward how ‘genetic’ various traits are.
Clearly, the public understands genes as playing more of a role when it comes to bodily traits and environment plays more of a role when it comes to things that humans have agency over—for things relating to the mind (Condit and Shen, 2011). “… people seem to deploy elements of fatalism or determinism into their worldviews or life goals when they suit particular ends, either in ways that are thought to ‘explain’ why other groups are the way they are or in ways that lessen their own sense of personal responsibility (Condit, 2011)” (Alexander, 2017: 19).
So, behavioral geneticists must be silenced, right? Bubela and Caulfield (2004: 1402) write:
Our data may also indicate a more subtle form of media hype, in terms of what research newspapers choose to cover. Behavioural genetics and neurogenetics were the subject of 16% of the newspaper articles. A search of PubMed on May 30, 2003, with the term “genetics” yielded 1 175 855 hits, and searches with the terms “behavioural genetics” and “neurogenetics” yielded a total of 3587 hits (less than 1% of the hits for “genetics”).
So Bubela and Caulfield (2004) found that 11 percent of the articles they looked at contained moderately to highly exaggerated claims, while 26 percent were slightly exaggerated. Behavioral genetic/neurogenetic stories comprised 16 percent of the newspaper articles they found, even though such research makes up less than one percent of the published genetics literature, which “might help explain why the reader gains the impression that much of genetics research is directed towards explaining human behavior; such copy makes newsworthy stories for obvious reasons” (Alexander, 2017: 17-18). Behavioral genetics research is indeed silenced!
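The overrepresentation here is a simple proportion comparison, using the figures from the quote above (16 percent of sampled newspaper articles vs. 3,587 of 1,175,855 PubMed “genetics” hits):

```python
# Figures from Bubela and Caulfield's quote: behavioural genetics plus
# neurogenetics made up 16% of the sampled newspaper articles, but only
# 3,587 of 1,175,855 PubMed "genetics" hits.
pubmed_share = 3587 / 1_175_855          # share of the scientific literature
newspaper_share = 0.16                   # share of sampled newspaper coverage
print(f"{pubmed_share:.2%}")                                  # 0.31%
print(f"{newspaper_share / pubmed_share:.0f}x overrepresented in the press")
```

So behavioral genetics gets roughly fifty times more newspaper coverage than its share of the literature would predict, which is the opposite of silencing.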
The public perception of genetics seems to line up with that of genetics researchers in some ways but not in others. The public at large is bombarded with numerous messages per day, especially in the TV programs they watch (inundated with ad after ad). Certain researchers claim that ‘free inquiry’ into race and IQ is being hushed. To Cofnas (2019) I would say: in virtue of what is it ‘free inquiry’ that we should study how a group handles an inherently biased test? To Carl (2018) I would say: what about the harm done by assuming that the hereditarian hypothesis is true, that IQ tests test intelligence, and by the funneling of minority children into EMR classes? And to Meisenberg (2019) I would say: the answer to the question “Should research into cognitive differences be forbidden?” should be “Yes”—they should be forbidden and banned, since no good can come from a test that was biased from its very beginnings. There is no ‘good’ that can come from using inherently biased tests, which is why the hereditarian-environmentalist debate on IQ is useless.
It is due to newspapers and other media outlets that people hold the beliefs on genetics they do. Behavioral genetics studies are overrepresented in newspapers, and IQ is a subfield of behavioral genetics. Is contemporary research ignored in the mainstream press? Not at all. Recent articles on the social stratification of Britain have appeared in the mainstream press—so what are Cofnas, Carl, and Meisenberg complaining about? It seems to stem from a persecution complex: the desire to be seen as the new ‘Galileo’ who, in the face of oppression, told the truth that others did not want to hear and so tried to silence him.
Well, that’s not what is going on here, as behavioral genetic studies are constantly pushed in the mainstream press; the complaining from the aforementioned authors means nothing. Look to the newspapers and the public’s perception of genes to see that their claims are false.
Project Coast was a secret biological/chemical weapons program of South Africa’s apartheid government, headed by a cardiologist named Wouter Basson. One of the many things the program attempted was to develop a biochemical weapon that would target blacks and only blacks.
I used to listen to the Alex Jones show in the beginning of the decade and in one of his rants, he brought up Project Coast and how they attempted to develop a weapon to only target blacks. So I looked into it, and there is some truth to it.
For instance, The Washington Times writes in their article Biotoxins Fall Into Private Hands:
More sinister were the attempts — ordered by Basson — to use science against the country’s black majority population. Daan Goosen, former director of Project Coast’s biological research division, said he was ordered by Basson to develop ways to suppress population growth among blacks, perhaps by secretly applying contraceptives to drinking water. Basson also urged scientists to search for a “black bomb,” a biological weapon that would select targets based on skin color, he said.
“Basson was very interested. He said ‘If you can do this, it would be very good,'” Goosen recalled. “But nothing came of it.”
They created novel ways to disperse the toxins: using letters and cigarettes to transmit anthrax to black communities (a method those old enough to remember 9/11 will recognize), lacing sugar cubes with salmonella, and lacing beer and peppermint candy with poison.
Project Coast was, at its heart, a eugenics program (Singh, 2008). Singh (2008: 9) writes, for example, that “Project Coast also speaks for the need for those involved in scientific research and practice to be sensitized to appreciate the social circumstances and particular factors that precipitate a loss of moral perspective on one’s actions.”
Jackson (2015) states that another objective of the Project was to develop anti-fertility drugs and attempt to distribute them into the black population in South Africa to decrease birth rates. They also attempted to create vaccines to make black women sterile to decrease the black population in South Africa in a few generations—along with attempting to create weapons to only target blacks.
The head of the weapons program, Wouter Basson, is even thought to have developed HIV with help from the CIA to cull the black population (Nattrass, 2012). There are many conspiracy theories that involve HIV and its creation to cull black populations, though they are pretty farfetched. In any case, though, since they were attempting to develop new kinds of bioweapons to target certain populations, it’s not out of the realm of possibility that there is a kernel of truth to the story.
So now we come to today. Kyle Bass said that the Chinese already have access to all of our genomes, through companies like Steve Hsu’s BGI, stating that “there’s a Chinese company called BGI that does the overwhelming majority of all the sequencing of U.S. genes. … China had the genomic sequence of every single person that’s been genotyped in the U.S., and they’re developing bioweapons that only affect Caucasians.”
I have no way to verify these claims (they’re probably bullshit), but with what went on in the 80s and 90s in South Africa with Project Coast, I don’t believe it’s outside of the realm of plausibility. Though Caucasians are a broad grouping.
It’d be like if someone attempted to develop a bioweapon that only targets Ashkenazi Jews. They could, let’s say, attempt to make a bioweapon targeting those with Tay-Sachs disease. It’s predominantly a Jewish disease, though it’s also prevalent in other populations, like French Canadians. Or it’d be like if someone attempted to develop a bioweapon that only targets those with the sickle cell trait (SCT). Certain African ethnies are more likely to carry the trait, but it’s also prevalent in southern Europe and North Africa, since the trait is common in areas with many mosquitoes.
With Chinese scientists like He Jiankui using CRISPR on two Chinese twins back in 2018 in an attempt to edit their genomes to make them less susceptible to HIV, I can see a scientist in China attempting something like this. In our increasingly technological world, with all of these new tools we develop, I would be surprised if nothing strange like this were going on.
Some claim that “China will always be bad at bioethics“:
Even when ethics boards exist, conflicts of interest are rife. While the Ministry of Health’s ethics guidelines state that ethical reviews are “based upon the principles of ethics accepted by the international community,” they lack enforcement mechanisms and provide few instructions for investigators. As a result, the ethics review process is often reduced to a formality, “a rubber stamp” in Hu’s words. The lax ethical environment has led many to consider China the “Wild East” in biomedical research. Widely criticized and rejected by Western institutions, the Italian surgeon Sergio Canavero found a home for his radical quest to perform the first human head transplant in the northern Chinese city of Harbin. Canavero’s Chinese partner, Ren Xiaoping, although specifying that human trials were a long way off, justified the controversial experiment on technological grounds, “I am a scientist, not an ethical expert.” As the Chinese government props up the pseudoscience of traditional Chinese medicine as a valid “Eastern” alternative to anatomy-based “Western” medicine, the utterly unscientific approach makes the establishment of biomedical regulations and their enforcement even more difficult.
Chinese ethicists, though, did respond to the charge of a ‘Wild East’, writing:
Some commentators consider Dr. He’s wrongdoings as evidence of a “Wild East” in scientific ethics or bioethics. This conclusion is not based on facts but on stereotypes and is not the whole story. In the era of globalization, rule-breaking is not limited to the East. Several cases of rule-breaking in research involved both the East and the West.
Henning (2006) notes that “bioethical issues in China are well covered by various national guidelines and regulations, which are clearly defined and adhere to internationally recognized standards. However, the implementation of these rules remains difficult, because they provide only limited detailed instructions for investigators.” With a large country like China, of course, it will be hard to implement guidelines on a wide-scale.
Gene-edited humans were going to come sooner or later, but the way that Jiankui went about it was all wrong. Jiankui raised funds, dodged supervision, and organized researchers in order to carry out the gene-editing on the Chinese twins. “Mad scientists” are, no doubt, in many places in many countries. “… the Chinese state is not fundamentally interested in fostering a culture of respect for human dignity. Thus, observing bioethical norms run second.”
Attempts to develop bioweapons targeting specific groups of people have already been made in recent history, so I wouldn’t doubt that someone, somewhere, is attempting something along these lines. Maybe it is happening in China, a ‘Wild East’ of low regulations and oversight. There is a bioethical divide between East and West, which I would chalk up to differences in collectivism vs individualism (which some have claimed to be ‘genetic’ in nature; Kiaris, 2012). Since the West is more individualistic, they would care about individual embryos, which eventually become persons; since the East is more collectivist, whatever is better for the group (that is, whatever can eventually make the group ‘better’) will override the individual, and so tinkering with individual genomes would be seen as less of an ethical problem to them.
Jewish IQ is one of the most-talked-about things in the hereditarian sphere. Jews have higher IQs, argue Cochran, Hardy, and Harpending (2006: 2), because “the unique demography and sociology of Ashkenazim in medieval Europe selected for intelligence.” To IQ-ists, IQ is influenced/caused by genetic factors, while environment accounts for only a small portion.
“Fourth, other environmentalists such as Marjoribanks (1972) have argued that the high intelligence of the Ashkenazi Jews is attributable to the typical “pushy Jewish mother”. In a study carried out in Canada he compared 100 Jewish boys aged 11 years with 100 Protestant white gentile boys and 100 white French Canadians and assessed their mothers for “Press for Achievement”, i.e. the extent to which mothers put pressure on their sons to achieve. He found that the Jewish mothers scored higher on “Press for Achievement” than Protestant mothers by 5 SD units and higher than French Canadian mothers by 8 SD units and argued that this explains the high IQ of the children. But this inference does not follow. There is no general acceptance of the thesis that pushy mothers can raise the IQs of their children. Indeed, the contemporary consensus is that family environmental factors have no long term effect on the intelligence of children (Rowe, 1994).”
The inference is a modus ponens:
P1: If p, then q.
P2: p.
C: Therefore, q.
Let p be “Jewish mothers scored higher on “Press for Achievement” by X SDs” and let q be “then this explains the high IQ of the children.”
So now we have:
Premise 1: If “Jewish mothers scored higher on “Press for Achievement” by X SDs”, then “this explains the high IQ of the children.”
Premise 2: “Jewish mothers scored higher on “Press for Achievement” by X SDs.”
Conclusion: Therefore, “this explains the high IQ of the children.”
Vaughn (2008: 12) notes that an inference is “reasoning from a premise or premises to … conclusions based on those premises.” The conclusion follows from the two premises, so how does the inference not follow?
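For completeness, the validity of the modus ponens form itself can be checked mechanically with a truth table; a minimal sketch (Python, with helper names of my own choosing):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material conditional: "if p, then q" is false only when p is true and q is false
    return (not p) or q

# Modus ponens is valid iff q is true in every row of the truth table
# where both premises -- "if p, then q" and "p" -- are true.
valid = all(q for p, q in product([True, False], repeat=2) if implies(p, q) and p)
print(valid)  # True
```

Only one row of the table satisfies both premises (p true, q true), and in that row the conclusion holds, so the form is truth-preserving.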
IQ tests are tests of specific knowledge and skills. It therefore follows that if, for example, a “mother is pushy” and being pushy leads to more studying, then the IQ of the child can be raised.
Lynn’s claim that “family environmental factors have no long term effect on the intelligence of children” is puzzling. Rowe relies heavily on twin and adoption studies, which have false assumptions underlying them, as noted by Richardson and Norgate (2005), Moore (2006), Joseph (2014), Fosse, Joseph, and Richardson (2015), and Joseph et al (2015). The EEA is false, so we cannot accept the genetic conclusions from twin studies.
Lynn and Kanazawa (2008: 807) argue that their “results clearly support the high intelligence theory of Jewish achievement while at the same time provide no support for the cultural values theory as an explanation for Jewish success.” They are positing “intelligence” as an explanatory concept, though Howe (1988) notes that “intelligence” is “a descriptive measure, not an explanatory concept.” “Intelligence,” says Howe (1997: ix), “is … an outcome … not a cause.” More specifically, it is an outcome of development from infancy all the way up to adulthood, and of being exposed to the items on the test. Lynn has claimed for decades that high intelligence explains Jewish achievement. But whence came intelligence? Intelligence develops throughout the life cycle—from infancy to adolescence to adulthood (Moore, 2014).
Ogbu and Simon (1998: 164) note that Jews are “autonomous minorities”—groups small in number. They note that “Although [Jews, the Amish, and Mormons] may suffer discrimination, they are not totally dominated and oppressed, and their school achievement is no different from the dominant group (Ogbu 1978)” (Ogbu and Simon, 1998: 164). Jews are also voluntary minorities, and for voluntary minorities, Ogbu (2002: 250-251; in Race and Intelligence: Separating Science from Myth) suggests five reasons for good test performance, among them:
- Their preimmigration experience: Some do well since they were exposed to the items and structure of the tests in their native countries.
- They are cognitively acculturated: They acquired the cognitive skills of the white middle-class when they began to participate in their culture, schools, and economy.
- The history and incentive of motivation: They are motivated to score well on the tests as they have this “preimmigration expectation” in which high test scores are necessary to achieve their goals for why they emigrated along with a “positive frame of reference” in which becoming successful in America is better than becoming successful at home, and the “folk theory of getting ahead in the United States”, that their chance of success is better in the US and the key to success is a good education—which they then equate with high test scores.
So if ‘intelligence’ tests are tests of culturally-specific knowledge and skills, and if certain groups are exposed more to this knowledge, it then follows that certain groups of people are better-prepared for test-taking—specifically IQ tests.
The IQ-ists attempt to argue that differences in IQ are due, largely, to differences in ‘genes for’ IQ, and this explanation is supposed to explain Jewish IQ, and, along with it, Jewish achievement. (See also Gilman, 2008 and Ferguson, 2008 for responses to the just-so storytelling from Cochran, Hardy, and Harpending, 2006.) Lynn, purportedly, is invoking ‘genetic confounding’—he is presupposing that Jews have ‘high IQ genes’ and this is what explains the “pushiness” of Jewish mothers. The Jewish mothers then pass on their “genes for” high IQ—according to Lynn. But the evolutionary accounts (just-so stories) explaining Jewish IQ fail. Ferguson (2008) shows how “there is no good reason to believe that the argument of [Cochran, Hardy, and Harpending, 2006] is likely, or even reasonably possible.” The tall-tale explanations for Jewish IQ, too, fail.
Prinz (2014: 68) notes that Cochran et al have “a seductive story” (aren’t all just-so stories seductive since they are selected to comport with the observation? Smith, 2016), while continuing (pg 71):
The very fact that the Utah researchers use to argue for a genetic difference actually points to a cultural difference between Ashkenazim and other groups. Ashkenazi Jews may have encouraged their children to study maths because it was the only way to get ahead. The emphasis remains widespread today, and it may be the major source of performance on IQ tests. In arguing that Ashkenazim are genetically different, the Utah researchers identify a major cultural difference, and that cultural difference is sufficient to explain the pattern of academic achievement. There is no solid evidence for thinking that the Ashkenazim advantage in IQ tests is genetically, as opposed to culturally, caused.
Nisbett (2008: 146) notes other problems with the theory—most notably Sephardic over-achievement under Islam:
It is also important to the Cochran theory that Sephardic Jews not be terribly accomplished, since they did not pass through the genetic filter of occupations that demanded high intelligence. Contemporary Sephardic Jews in fact do not seem to have unusually high IQs. But Sephardic Jews under Islam achieved at very high levels. Fifteen percent of all scientists in the period AD 1150-1300 were Jewish—far out of proportion to their presence in the world population, or even the population of the Islamic world—and these scientists were overwhelmingly Sephardic. Cochran and company are left with only a cultural explanation of this Sephardic efflorescence, and it is not congenial to their genetic theory of Jewish intelligence.
Finally, Berg and Belmont (1990: 106) note that “The purpose of the present study was to clarify a possible misinterpretation of the results of Lesser et al’s (1965) influential study that suggested the existence of a “Jewish” pattern of mental abilities. In establishing that Jewish children of different socio-cultural backgrounds display different patterns of mental abilities, which tend to cluster by socio-cultural group, this study confirms Lesser et al’s position that intellectual patterns are, in large part, culturally derived.” Cultural differences exist; cultural differences have an effect on psychological traits; if cultural differences exist and cultural differences have an effect on psychological traits (with culture influencing a population’s beliefs and values) and IQ tests are culturally-/class-specific knowledge tests, then it necessarily follows that IQ differences are cultural/social in nature, not ‘genetic.’
In sum, Lynn’s claim that the inference does not follow is ridiculous. The argument provided is a modus ponens, so the inference does follow. Similarly, Lynn’s claim that “pushy Jewish mothers” don’t explain the high IQs of Jews doesn’t follow. If IQ tests are tests of middle-class knowledge and skills, and children are exposed to the structure and items on them, then it follows that being “pushy” with children—that is, getting them to study and whatnot—would explain higher IQs. Lynn’s and Kanazawa’s assertion that “high intelligence is the most promising explanation of Jewish achievement” also fails, since intelligence is not an explanatory concept—a cause—it is a descriptive measure that develops across the lifespan.
Why do some groups of people use chopsticks and others do not? Years back, Hamer and Sirota (2000) created a thought experiment along these lines: find a few hundred students from a university, gather DNA samples from their cheeks, and map them for candidate genes associated with chopstick use. Come to find out, one of the genetic markers was associated with chopstick use—accounting for 50 percent of the variation in the trait. The effect even replicated many times and was highly significant: but it was biologically meaningless.
One may look at East Asians and say “Why do they use chopsticks” or “Why are they so good at using them while Americans aren’t?” and come to such ridiculous studies such as the one described above. They may even find an association between the trait/behavior and a genetic marker. They may even find that it replicates and is a significant hit. But, it can all be for naught, since population stratification reared its head. Population stratification “refers to differences in allele frequencies between cases and controls due to systematic differences in ancestry rather than association of genes with disease” (Freedman et al, 2004). It “is a potential cause of false associations in genetic association studies” (Oetjens et al, 2016).
Such population stratification in the chopsticks gene study described above should have been anticipated since they studied two different populations. Kaplan (2000: 67-68) described this well:
A similar argument, by the way, holds true for molecular studies. Basically, it is easy to mistake mere statistical associations for a causal connection if one is not careful to properly partition one’s samples. Hamer and Copeland develop an amusing example of some hypothetical, badly misguided researchers searching for the “successful use of selected hand instruments” (SUSHI) gene (hypothesized to be associated with chopstick usage) between residents in Tokyo and Indianapolis. Hamer and Copeland note that while you would be almost certain to find a gene “associated with chopstick usage” if you did this, the design of such a hypothetical study would be badly flawed. What would be likely to happen here is that a genetic marker associated with the heterogeneity of the group involved (Japanese versus Caucasian) would be found, and the heterogeneity of the group involved would independently account for the differences in the trait; in this case, there is a cultural tendency for more people who grow up in Japan than people who grow up in Indianapolis to learn how to use chopsticks. That is, growing up in Japan is the causally important factor in using chopsticks; having a certain genetic marker is only associated with chopstick use in a statistical way, and only because those people who grow up in Japan are also more likely to have the marker than those who grew up in Indianapolis. The genetic marker is in no way causally related to chopstick use! That the marker ends up associated with chopstick use is therefore just an accident of design (Hamer and Copeland, 1998, 43; Bailey 1997 develops a similar example).
In this way, most—if not all—of the results of genome-wide association studies (GWASs) can be accounted for by population stratification. Hamer and Sirota (2000) is a warning to psychiatric geneticists not to be quick to ascribe function and causation to hits on certain genes from association studies (of which GWASs are one kind).
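Kaplan’s point can be made concrete with a toy simulation. Everything here is hypothetical—the rates and the `sample`/`association` helpers are my own. The marker does nothing within either city, yet the pooled analysis finds a strong “association”:

```python
import random

random.seed(0)

# Toy model of the SUSHI example: within each city the genetic marker has NO
# effect on chopstick use; the two cities simply differ in both marker
# frequency and (for cultural reasons) rate of chopstick use.
def sample(n, marker_freq, use_rate):
    return [(random.random() < marker_freq, random.random() < use_rate)
            for _ in range(n)]

tokyo = sample(1000, marker_freq=0.8, use_rate=0.95)
indianapolis = sample(1000, marker_freq=0.2, use_rate=0.05)

def association(pairs):
    # Difference in chopstick-use rate between marker carriers and non-carriers
    carriers = [use for m, use in pairs if m]
    others = [use for m, use in pairs if not m]
    return sum(carriers) / len(carriers) - sum(others) / len(others)

# Pooled across cities, the marker looks strongly "associated" with the trait;
# within either city alone, the association vanishes.
print(association(tokyo + indianapolis))              # large
print(association(tokyo), association(indianapolis))  # both near zero
```

The pooled “effect” appears only because marker carriers are disproportionately from Tokyo, where chopstick use is culturally common—exactly the accident of design Kaplan describes.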
Many studies, for example Sniekers et al (2017) and Savage et al (2018), purport to “account for” less than 10 percent of the variance in a trait like “intelligence” (derived from non-construct-valid IQ tests). Other GWA studies purport to show genes that affect testosterone production and that those who have a certain variant are more likely to have low testosterone (Ohlsson et al, 2011). Population stratification can have an effect in these studies, too. GWASs give rise to spurious correlations that arise due to population structure—which is what GWASs are actually measuring: they are measuring social class, not a “trait” (Richardson, 2017b; Richardson and Jones, 2019). Note that correcting for socioeconomic status (SES) fails, as the two are distinct (Richardson, 2002). (Note, too, that GWASs lead to PGSs, which are, of course, flawed as well.)
Such papers presume that correlations are causes and that interactions between genes and environment either don’t exist or are irrelevant (see Gottfredson, 2009 and my reply). Both of these claims are false. Correlations can, of course, lead to figuring out causes, but, as with the chopstick example above, attributing causation to hits that are “replicable” and “strongly significant” will still lead to false positives due to that same population stratification. Of course, GWAS and similar studies are attempting to account for the heritability estimates gleaned from twin, family, and adoption studies. Though the assumptions used in these kinds of studies have been shown to be false and, therefore, heritability estimates are highly exaggerated (and flawed), which leads to “looking for genes” that aren’t there (Charney, 2012; Joseph et al, 2016; Richardson, 2017a).
Richardson’s (2017b) argument is simple: (1) there is genetic stratification in human populations, which will correlate with social class; (2) given (1), that genetic stratification will be associated with the “cognitive” variation; (3) if (1) and (2), then what GWA studies are finding is not “genetic differences” between groups in terms of “intelligence” (as shown by “IQ tests”), but population stratification between social classes. Population stratification persists even in “homogeneous” populations (see references in Richardson and Jones, 2019), and so the “corrections for” population stratification are anything but.
So what accounts for the pittance of “variance explained” in GWASs and other similar association studies (Sniekers et al, 2017 “explained” less than 5 percent of the variance in IQ)? Population stratification—specifically, it is capturing genetic differences that occurred through migration. GWA studies use huge samples in order to find the genetic signals of the genes of small effect that underlie the complex trait being studied. Take what Noble (2018) says:
As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).
Calude and Longo (2016; emphasis theirs) “prove that very large databases have to contain arbitrary correlations. These correlations appear only due to the size, not the nature, of data. They can be found in “randomly” generated, large enough databases, which — as we will prove — implies that most correlations are spurious.”
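Calude and Longo’s point is easy to illustrate numerically. A sketch with arbitrary sizes (50 observations, 200 pure-noise variables; the `corr` helper is my own):

```python
import random

random.seed(1)

# A purely random "dataset": 200 candidate variables, 50 observations each,
# plus a random outcome. By construction, there is nothing real to find.
n_obs, n_vars = 50, 200
variables = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
outcome = [random.gauss(0, 1) for _ in range(n_obs)]

def corr(x, y):
    # Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Screening many noise variables against a noise outcome still turns up
# "strong" correlations by chance alone.
best = max(abs(corr(v, outcome)) for v in variables)
print(best)  # comfortably far from zero despite the data being pure noise
```

The more variables you screen, the larger the best chance correlation gets—which is exactly the “bigger data, more spurious correlations” problem Noble describes.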
So why should we take association studies seriously when they fall prey to the problem of population stratification (measuring differences between social classes and other populations) along with the fact that big datasets lead to spurious correlations? I fail to think of a good reason why we should take these studies seriously. The chopsticks gene example perfectly illustrates the current problems we have with GWASs for complex traits: we are just seeing what is due to social—and other—stratification between populations and not any “genetic” differences in the trait that is being looked at.
I started this blog in June of 2015. I recall thinking of names for the blog, trying “politicallyincorrect.com” at first, but the domain was taken. I then decided on the name “notpoliticallycorrect.me”. Back then, of course, I was a hereditarian pushing the likes of Rushton, Kanazawa, Jensen, and others. To be honest, I could never see myself disbelieving the “fact” that certain races were more or less intelligent than others; disbelieving it was preposterous, I used to think. IQ tests served as a completely scientific instrument which showed, however crudely, that certain races were more intelligent than others. I held these beliefs for around two years after the creation of this blog.
Back then, I used to go to Barnes n Noble and of course, go and browse the biology section, choose a book and drink coffee all day while reading. (I was drinking black coffee, of course.) I recall back in April of 2017 seeing this book DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes on the shelf in the biology section. The baby blue cover of the book caught my eye—but I scoffed at the title. DNA most definitely was destiny, I thought. Without DNA we could not be who we were. I ended up buying the book and reading it. It took me about a week to finish it and by the end of the book, Heine had me questioning my beliefs.
In the book, Heine discusses IQ, heritability, genes, DNA testing to catch diseases, the MAOA gene, and so on. All in all, the book is against genetic essentialism which is rife in public—and even academic—thought.
After I read DNA Is Not Destiny, over the next few weeks at Barnes n Noble I would keep seeing Ken Richardson’s Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. I recall scoffing even more at the title than I did at Heine’s book. I did not buy the book at first, but I kept seeing it every time I went. When I finally bought it, my worldview was transformed. Before, I thought of IQ tests as being able to measure, however crudely, intelligence differences between individuals and groups. The number the test spits out was one’s “intelligence quotient”, and there was no way to raise it—but of course there were many ways to decrease it.
But Richardson’s book showed me that there were many biases implicit in the study of “intelligence”, both conscious and unconscious. The book showed me the many false assumptions that IQ-ists make when constructing tests. Perhaps most importantly, it showed me that IQ test scores track one’s social class, and that social class encompasses many other variables that affect test performance. Stating that IQ tests are instruments that identify one’s social class thus seemed apt, especially given the content of the tests and the fact that they were created by members of a narrow upper class. This, to me, ensured that the test designers would get the result they wanted.
Not only did this book change my views on IQ, but I did a complete 180 on evolution, too (which Fodor and Piattelli-Palmarini then solidified). Richardson, in chapters 4 and 5, shows that genes don’t work the way most popularly think they do and that they are only used by and for the physiological system to carry out different processes. I don’t know which part of this book—the part on IQ or the part on evolution—most radically changed my beliefs. But after reading Richardson, I discovered Susan Oyama, Denis Noble, Eva Jablonka and Marion Lamb, David Moore, David Shenk, Paul Griffiths, Karola Stotz, Jerry Fodor, and others who opposed the neo-Darwinian Modern Synthesis.
Richardson’s most recent book then led me to his other work—and that of other critics of IQ and the current neo-Darwinian Modern Synthesis—and from then on, I was what most would term an “IQ-denier”, since I disbelieve the claim that IQ tests test intelligence, and an “evolution denier”, since I deny the claim that natural selection is a mechanism. In any case, the radical changes in both of what I would term my major views were slow-burning, occurring over the course of a few months.
This can be evidenced by just reading the archives of this blog. For example, check the archives from May 2017 and read my article Height and IQ Genes. One can then read the article from April 2017 titled Reading Wrongthought Books in Public to see that over a two-month period my views slowly began to shift toward “IQ-denialism” and the Extended Evolutionary Synthesis (EES). Of course, in June of 2017, after defending Rushton’s r/K selection theory for years, I recanted those views, too, due to Anderson’s (1991) rebuttal of Rushton’s theory. That three-month period from April to June was extremely pivotal in shaping the views I hold today.
After reading those two books, my views about IQ shifted from that of one who believed that nothing could ever shake his belief in them to one of the most outspoken critics of IQ in the “HBD” community. But the views on evolution that I now hold may be more radical than my current views on IQ. This is because Darwin himself—and the theory he formulated—is the object of attack, not a test.
The views I used to hold were staunch; I really believed that I would never recant my views, because I was privy to “The Truth ™” and everyone else was just a useful idiot who did not believe in the reality of intelligence differences which IQ tests showed. Though, my curiosity got the best of me and I ended up buying two books that radically shifted my thoughts on IQ and along with that evolution itself.
So why did I change my views on IQ and evolution? I changed them due to the conceptual and methodological problems on both counts that Richardson and Heine pointed out to me. These view changes, which I underwent more than two years ago, were pretty shocking to me. As I realized that my views were beginning to shift, I couldn’t believe it, since I recall saying to myself “I’ll never change my views.” The inadequacy of the replies to the critics was yet another reason for the shift.
It’s funny how things work out.
Five years away is always five years away. When one makes such a claim, they can always fall back on the “just wait five more years!” canard. Charles Murray is one who makes such claims. In an interview with the editor of Skeptic Magazine, Murray stated to Frank Miele:
I have confidence that in five years from now, and thereafter, this book will be seen as a major accomplishment.
This interview was in 1996 (after the release of the softcover edition of The Bell Curve), and so “five years” would be 2001. But “predictions” such as this from HBDers—that the next big thing for their ideology is only X years away, for example—happen a lot. I’ve seen many HBDers claim that in only 5 to 10 years the evidence for their position will come out. Such claims seem strangely religious to me. There is a reason for that. (See Conley and Domingue, 2016 for a molecular genetic refutation of The Bell Curve. While Murray’s prediction failed, 22 years after The Bell Curve’s publication, the claims of Murray and Herrnstein were refuted.)
Numerous people throughout history have made predictions regarding the date of Christ’s return. Some have used calculations from the Bible to ascertain the date. We can just look at the Wikipedia page for predictions and claims for the second coming of Christ, where there are many (obviously failed) predictions of His return.
Take John Wesley’s claim that Revelation 12:14 referred to the day that Christ would come. Or Charles Taze Russell’s (the first president of the Watch Tower Society of Jehovah’s Witnesses) claim that Jesus would return in 1874 and rule invisibly from heaven.
Russell’s beliefs began with Adventist teachings. While Russell, at first, did not take to the claim that Christ’s return could be predicted, that changed when he met Adventist author Nelson Barbour. The Adventists taught that the End Times began in 1799, and that Christ returned invisibly in 1874 with a physical return to come in 1878. (When this did not come to pass, many followers left Barbour, and Russell stated that Barbour did not get the event wrong, he just got the date wrong.) So all Christians who died before 1874 would be resurrected, and Armageddon would begin in 1914. Since WWI began in 1914, Russell took that as evidence that his prediction was coming to pass. So Russell sold his clothing stores, worth millions of dollars today, and began writing and preaching about Christ’s imminent return. This doesn’t need to be said, but the predictions obviously failed.
So the date of 1914 for Armageddon (when Christ is supposed to return) was arrived at by Russell from studying the Bible and the great pyramids:
A key component to the calculation was derived from the book of Daniel, Chapter 4. The book refers to “seven times“. He interpreted each “time” as equal to 360 days, giving a total of 2,520 days. He further interpreted this as representing exactly 2,520 years, measured from the starting date of 607 BCE. This resulted in the year 1914-OCT being the target date for the Millennium.
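Spelling out the arithmetic in that calculation (the figures are from the quote above; the one-year adjustment reflects the absence of a year zero between 1 BCE and 1 CE):

```python
# Russell's "seven times" arithmetic, as described in the quote above.
times = 7
days_per_time = 360
total_years = times * days_per_time   # each day read as a year: 2520
start_bce = 607                       # the claimed starting date, 607 BCE
end_ce = total_years - start_bce + 1  # +1 because there is no year zero
print(total_years, end_ce)  # 2520 1914
```

The numbers do add up to 1914; the problem, of course, is not the arithmetic but the arbitrary interpretive steps feeding into it.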
Here is the prediction in Russell’s words: “…we consider it an established truth that the final end of the kingdoms of this world, and the full establishment of the Kingdom of God, will be accomplished by the end of A.D. 1914” (1889). When 1914 came and went (sans the beginning of WWI, which he took to be a sign of the End Times), Russell changed his view.
Now, we can liken the Russell situation to Murray. Murray claimed that five years after his book’s publication, the “book would be seen as a major accomplishment.” Murray also made a similar claim back in 2016. Someone wrote to evolutionary biologist Joseph Graves about a talk Murray gave; Murray was offered an opportunity to debate Graves about his claims. Graves stated (my emphasis):
After his talk I offered him an opportunity to debate me on his claims at/in any venue of his choosing. He refused again, stating he would agree after another five years. The five years are in the hope of the appearance of better genomic studies to buttress his claims. In my talk I pointed out the utter weakness of the current genomic studies of intelligence and any attempt to associate racial differences in measured intelligence to genomic variants.
(Do note that this was back in April of 2016, about one year before I changed my hereditarian views to that of DST. I emailed Murray about this, he responded to me, and gave me permission to post his reply which you can read at the above link.)
Emil Kirkegaard stated on Twitter:
Do you wanna bet that future genomics studies will vindicate us? Ashkenazim intelligence is higher for mostly genetic reasons. Probably someone will publish mixed-ethnic GWAS for EA/IQ within a few years
Notice, though, that “within a few years” is vague; I would take it to be, as Kirkegaard states next, three years. Kirkegaard was much more specific for PGS (polygenic scores) and Ashkenazi Jews, stating that “causal variant polygenic scores will show alignment with phenotypic gaps for IQ eg in 3 years time.” I’ll remember this; January 6th, 2022. (Though it was just an example given, this is a good example of a prediction from an HBDer.) Never mind the problems with PGS/GWA studies (Richardson, 2017; Janssens and Joyner, 2019; Richardson and Jones, 2019).
I can see a prediction being made, it not coming to pass, and, just like Russell, one stating “No!! X, Y, and Z happened so that invalidated the prediction! The new one is X time away!” Being vague about timetables for yet-to-occur events is dishonest; stick to the claim, and if it does not occur… stop holding the view, just as Russell did. However, people like Murray won’t change their views; they’re too entrenched in this. Most may know that over two years ago I changed my views on hereditarianism (which “is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality“) due to two books: DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes and Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. But I may just be a special case here.
Genes, Brains, and Human Potential then led me to the work of Jablonka and Lamb, Denis Noble, David Moore, Robert Lickliter, and others—the developmental systems theorists. DST is completely at odds with the main “field” of “HBD”: behavioral genetics. See Griffiths and Tabery (2013) for why teasing apart genes and environment—nature and nurture—is problematic.
In any case, five years away is always five years away, especially with HBDers. That magic evidence is always “right around the corner”, despite the fact that none ever comes. I know that some HBDers will probably clamor that I’m wrong and that Murray or another “HBDer” has made a successful prediction without immediately changing its date. But, just like Charles Taze Russell, when the prediction does not come to pass, just make something up about how and why it didn’t come to pass and everything should be fine.
I think Charles Murray should change his name to Charles Taze Russell, since he has pushed back the date of his prediction so many times. Though, to Russell’s credit, he did eventually recant his views. I would find it hard to believe that Murray would; he’s too deep in this game, and his career writing books and being an AEI pundit is on the line.
So I strongly doubt that Murray would ever come outright and say “I was wrong.” Too much money is on the line for him. (Note that Murray has a new book releasing in January titled Human Diversity: Gender, Race, Class, and Genes and you know that I will give a scathing review of it, since I already know Murray’s MO.) It’s ironic to me: Most HBDers are pretty religious in their convictions and can and will explain away data that doesn’t line up with their beliefs, just like a theist.
The cold winter theory (CWT) purports to explain why those whose ancestors evolved in colder climes are more “intelligent” than those whose ancestors evolved in warmer climes. Popularized by Rushton (1997), Lynn (2006), and Kanazawa (2012), the theory supposedly accounts for the “haves” and the “have nots” in regard to intelligence. However, the theory is a just-so story; that is, it explains what it purports to explain without generating previously unknown facts not used in its construction. PumpkinPerson is irritated by people who do not believe the just-so story of the CWT, writing (citing the same old “challenges” as Lynn, which were dispatched by McGreal):
The cold winter theory is extremely important to HBD. In fact I don’t even understand how one can believe in racially genetic differences in IQ without also believing that cold winters select for higher intelligence because of the survival challenges of keeping warm, building shelter, and hunting large game.
The CWT is “extremely important to HBD“, as PP claims, since there needs to be an evolutionary basis for population differences in “intelligence” (IQ). Without the just-so story, the claim that racial differences in “intelligence” are “genetically” based crumbles.
Well, here is the biggest “challenge” (all other refutations of it aside) to the CWT: notions of which populations are or are not “intelligent” change with the times. The best example is what the Greeks—specifically Aristotle—wrote about the intelligence of those who lived in the north. Maurizio Meloni, in his 2019 book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics, captures this point (pg 41-42; emphasis his):
Aristotle’s Politics is a compendium of all these ideas [Orientals being seen as “softer, more delicate and unwarlike” along with the structure of militaries], with people living in temperate (mediocriter) places presented as the most capable of producing the best political systems:
“The nations inhabiting the cold places and those of Europe are full of spirit but somewhat deficient in intelligence and skill, so that they continue comparatively free, but lacking in political organization and the capacity to rule their neighbors. The peoples of Asia on the other hand are intelligent and skillful in temperament, but lack spirit, so that they are in continuous subjection and slavery. But the Greek race participates in both characters, just as it occupies the middle position geographically, for it is both spirited and intelligent; hence it continues to be free and to have very good political institutions, and to be capable of ruling all mankind if it attains constitutional unity.” (Pol. 1327b23-33, my italics)
Views of direct environmental influence and the porosity of bodies to these effects also entered the military machines of ancient empires, like that of the Romans. Officers such as Vegetius (De re militari, I/2) suggested avoiding recruiting troops from cold climates as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates should be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016). Delicate and effeminizing land was also to be abandoned as soon as possible, according to Manilius and Caesar (ibid). Probably the most famous geopolitical dictum of antiquity reflects exactly this plastic power of places: "soft lands breed soft men", according to the claim that Herodotus attributed to Cyrus.
Isn't it strange how things change? Quite obviously, which population is or is not "intelligent" depends on the time and place of the observation. Northern Europeans, purported today to be more intelligent than those from temperate and hotter climes, were seen in antiquity as less intelligent than those who lived in temperate and hotter climes. Imagine stating what Aristotle said thousands of years ago in the present day: those who push the CWT just-so story would look at you like you're crazy because, supposedly, those who lived and evolved in colder climes had to plan ahead and faced a tougher environment than those who lived closer to the equator.
Imagine we could transport Aristotle to the present day. What would he say about our perspectives on which population is or is not intelligent? Surely he would think it ridiculous that the Greeks today are less “intelligent” than those from northern Europe. But that only speaks to how things change and how people’s perspectives on things change with the times and who is or is not a dominant group. Now imagine that we can transport someone (preferably an “IQ” researcher) to antiquity when the Greeks were at the height of their power. They would then create a just-so story to justify their observations about the intelligence of populations based on their evolutionary history.
Anatoly Karlin cites Galton, who claims that ancient Greek IQ was 125, while Karlin himself claims it was 90. I cite Karlin's article not to contest his "IQ estimates", nor Galton's, but to show how disparate the "estimates" of the intelligence of the ancient Greeks are. And according to the Greeks themselves, they occupied the middle position geographically, and so were both spirited and intelligent compared to Asians and northern Europeans.
This is similar to Wicherts, Borsboom, and Dolan (2010), who responded to Rushton, Lynn, and Templer. They state that the socio-cultural achievements of Mesopotamia and Egypt stand in "stark contrast to the current low level of national IQ of peoples of Iraq and Egypt and that these ancient achievements appear to contradict evolutionary accounts of differences in national IQ." One can make a similar observation about the Maya: their cultural achievements stand in stark contrast to their "evolutionary history" in warm climes. The Maya were geographically isolated from other populations, and they still independently created a writing system, along with other cultural achievements showing that "national IQs" are irrelevant to what a population achieved. I'm sure an IQ-ist can create a just-so story to explain this away, but that's not the point.
Going back to what Karlin and Galton stated about Greek IQ: it is irrelevant to Greek achievements. Whether their IQ was 120-125 or 90 is irrelevant to what they achieved. To the Mesopotamians and Egyptians, they were more intelligent than those from northern climes; they would, obviously, think that based on their own achievements and the lack of achievements in the north. The achievements of peoples in antiquity would paint a whole different picture in regard to an evolutionary theory of human intelligence and its distribution in human populations.
So which just-so story (ad hoc hypothesis) should we accept? Or should we just accept that which population is or is not "intelligent" and capable of constructing militaries is contingent on the time and place of the observation? Looking at the "national IQs" of peoples in antiquity would show a huge difference from what we observe today about the "national IQs" (supposedly 'intelligence') of populations around the world. In antiquity, those who lived in temperate and even hotter climes had greater achievements than others, and Greeks and Romans argued that peoples from northern climes should not be enlisted in the military due to where they were from.
These observations from the Greeks and Romans about whom to enlist in the military, along with their thoughts on northern Europeans, prove that perspectives on which population is or is not "intelligent" are contingent on time and place. This is why "national IQs" should not be accepted, even setting aside the problems with the data (Richardson, 2004; also see Morse, 2008; also see The Ethics of Development: An Introduction by Ingram and Derdak, 2018). Seeing the development of countries/populations in antiquity would lead to a whole different evolutionary theory of the intelligence of populations, proving the contingency of these observations.
In 2012, biologist Hippokratis Kiaris published a book titled Genes, Polymorphisms, and the Making of Societies: How Genetic Behavioral Traits Influence Human Cultures. His main point is that "the presence of different genes in the corresponding people has actually dictated the acquisition of these distinct cultural and historical lines, and that an alternative outcome might be unlikely" (Kiaris, 2012: 9). I have not seen this book discussed on any HBD blog, and given its premise (that it purports to explain behavioral/societal differences between Eastern and Western societies) you would think it would be. The book is short, and Kiaris uses a lot of determinist language. (It is worth noting that he does not discuss IQ at all.)
In the book, he discusses how genes “affect” and “dictate” behavior which then affects “collective decisions and actions” while also stating that it is “conceivable” that history, and what affects human decision-making and reactions, are also “affected by the genetic identity of the people involved” (Kiaris, 2012: 11). Kiaris argues that genetic differences between Easterners and Westerners are driven by “specific environmental conditions that apparently drove the selection of specific alleles in certain populations, which in turn developed particular cultural attitudes and norms” (Kiaris, 2012: 91).
Kiaris attempts to explain the societal differences between the peoples who adopted Platonic thought and those who adopted Confucian thought. He argues that differences between Eastern and Western societies “are not random and stochastic” but are “dictated—or if this is too strong an argument, they are influenced considerably—by the genes that these people carry.” So, Kiaris says, “what we view as a choice is rather the complex and collective outcome of the influence of people’s specific genes combined with the effects of their specific environment … [which] makes the probability for rendering a certain choice distinct between different populations” (Kiaris, 2012: 50).
The first thing that Kiaris discusses (behavior-wise) is the DRD4 gene. Polymorphisms of this gene have been associated with miles migrated from Africa (with a correlation of .85), along with novelty-seeking and hyperactivity, which may explain the association found between DRD4 frequency and miles migrated from Africa (Chen et al, 1999). Kiaris notes, of course, that DRD4 alleles are unevenly distributed across the globe, with people who migrated further from Africa having a higher frequency of these alleles. Europeans were more likely to have the "novelty-seeking" 7-repeat (7R) allele of DRD4 compared to Asian populations (Chang et al, 1996). But, Kiaris (2012: 68) wisely writes (emphasis mine):
Whether these differences [in DRD alleles] represent the collective and cumulative result of selective pressure or they are due to founder effects related to the genetic composition of the early populations that inhabited the corresponding areas remains elusive and is actually impossible to prove or disprove with certainty.
Kiaris then discusses differences between Eastern and Western societies and how we might understand these differences between societies as regards novelty-seeking and the DRD4-7 distribution across the globe. Westerners are more individualistic and this concept of individuality is actually a cornerstone of Western civilization. The “increased excitability and attraction to extravagance” of Westerners, according to Kiaris, is linked to this novelty-seeking behavior which is also related to individualism “and the tendency to constantly seek for means to obtain satisfaction” (Kiaris, 2012: 68). We know that Westerners do not shy away from exploration; after all, the West discovered the East and not vice versa.
Easterners, on the other hand, are more passive and have “an attitude that reflects a certain degree of stoicism and makes life within larger—and likely collectivistic—groups of people more convenient“. Easterners, compared to Westerners, take things “the way they are” which “probably reflects their belief that there is not much one can or should do to change them. This is probably the reason that these people appear rigid against life and loyal, a fact that is also reflected historically in their relatively high political stability” (Kiaris, 2012: 68-69).
Kiaris describes DRD4 as a "prototype Westerner's gene" (pg 83), stating that the 7R allele of this gene is found more frequently in Europeans compared to Asians. The gene has been associated with increased novelty-seeking, exploratory activity, and human migration, along with liberal ideology. These, of course, are cornerstones of Western civilization and thought, and so Kiaris argues that the higher frequency of this allele in Europeans explains, in part, certain societal differences between the East and West. Kiaris (2012: 83) then makes a bold claim:
All these features [novelty-seeking, exploratory activity and migration] indeed tend to characterize Westerners and the cultural norms they developed, posing the intriguing possibility that DRD4 can actually represent a single gene that can "predispose" for what we understand as the stereotypic Western-type behavior. Thus, we could imagine that an individual bearing the 7-repeat allele functions more efficiently in Western society while the one without this allele would probably be better suited to a society with Eastern-like structure. Alternatively, we could propose that a society with more individuals bearing the 7-repeat allele is more likely to have followed historical lines and choices more typical of a Western society, while a population with a lower number (or deficient as it is the actual case with Easterners) of individuals with the 7-repeat allele would more likely attend to the collective historical outcome of Easterners.
Kiaris (2012: 84) is, importantly, skeptical that having a high number of “novelty-seekers” and “explorers” would lead to higher scientific achievement. This is because “attempts to extrapolate from individual characteristics to those of a group of people and societies possess certain dangers and conceptual limitations.”
Kiaris (2012: 86) says that "collectivistic behavior … is related to the activity of serotonin." He then cites a few other polymorphisms that are associated with collectivistic behavior as well. Goldman et al (2010) show ethnic differences in the frequencies of the 5-HTT l and s alleles (table from Kiaris, 2012: 86):
It should also be noted that populations (Easterners) that had a higher frequency of the s allele had a lower prevalence of depression than Westerners. So Western societies are more likely to “suffer more frequently from various manifestations of depression and general mood disorders than those of Eastern cultures (Chiao & Blizinsky, 2010)” (Kiaris, 2012: 89).
As can be seen from the table above, Westerners are more likely to have the l allele than Easterners, which should predict higher levels of happiness in Western compared to Eastern populations. "Happiness", however, is in many ways subjective; so how would one find an objective way to measure it cross-culturally? Still, Kiaris (2012: 94) writes: "Intuitively speaking, though, I have to admit that I would rather expect Asians to be happier, in general, than Westerners. I cannot support this by specific arguments, but I think the reason for that is related to the individualistic approach of life that the people possess in Western societies: By operating under individualistic norms, it is unavoidably stressful, a condition that operates at the expense of the perception of individuals' happiness."
Kiaris discusses catechol-O-methyltransferase (COMT), an enzyme responsible for the inactivation of catecholamines, that is, the hormones dopamine, adrenaline, and noradrenaline. These hormones regulate the "fight or flight" response (Goldstein, 2011). Since catecholamines play a regulatory role in the "fight or flight" mechanism, increased COMT activity results in lower dopamine levels, which is in turn associated with better performance under stressful conditions.
“Warriors” and “worriers” are intrinsically linked to the “fight or flight” mechanism. A “warrior” is someone who performs better under stress, achieves maximal performance despite threat and pain, and is more likely to act efficiently in a threatening environment. A “worrier” is “someone that has an advantage in memory and attention tasks, is more exploratory and efficient in complex environments, but who exhibits worse performance under stressful conditions (Stein et al., 2006)” (Kiaris, 2012: 102).
Kiaris (2012: 107) states that “at the level of society, it can be argued that the specific Met-bearing COMT allele contributes to the buildup of Western individualism. Opposed to this, Easterners’ increased frequency of the Val-bearing “altruistic” allele fits quite well with the construction of a collectivistic society: You have to be an altruist at some degree in order to understand the benefits of collectivism. By being a pure individualist, you only understand “good” as defined and reflected by your sole existence.”
So, Kiaris' whole point is this: there are differences in polymorphic genes between Easterners and Westerners (the alleles are unevenly distributed), and differences in these polymorphisms (DRD4, 5-HTT, MAOA, and COMT) explain behavioral differences between Eastern and Western societies. The polymorphisms associated with "Western behavior" (DRD4) are associated with increased novelty-seeking, a tendency toward financial risk-taking, distance of OoA migration, and liberal ideology. Numerous MAOA and 5-HTT polymorphisms are associated with collectivism (e.g., Way and Lieberman, 2006 for MAOA and collectivism). The COMT polymorphism more likely to be found in Westerners predisposes for "worrier" behavior. Furthermore, certain polymorphisms of the CHRNB3 gene are more common in all of the populations that migrated out of Africa, which predisposed for leader, and not follower, behavior.
| Trait | Gene | Finding |
|---|---|---|
| Novelty seeking | DRD4 | 7-repeat novelty-seeking allele more common in the West |
| Migration | DRD4 | 7-repeat allele is associated with distance of migration from Africa |
| Nomads/settlers | DRD4 | 7-repeat allele is associated with nomadic life |
| Political ideology | DRD4 | 7-repeat allele is more common in liberals |
| Financial risk taking | DRD4 | 7-repeat allele is more common in risk takers |
| Individualism/collectivism | 5-HTT | s allele (collectivistic) is more common in the East |
| Happiness | 5-HTT | l allele has higher prevalence in individuals happy with their life |
| Individualism/collectivism | MAOA | 3-repeat allele (collectivistic) is more common in the East |
| Warrior/worrier | COMT | A allele (worrier) is more common in the West |
| Altruism | COMT | G allele (warrior) is associated with altruism |
| Leader/follower | CHRNB3 | A allele (leader) is more common in populations out of Africa |
The table above is from Kiaris (2012: 117), laying out the genes/polymorphisms discussed in his book, which supposedly show how and why Eastern and Western societies are so different.
Kiaris (2012: 141) then makes a bold claim: "Since we know now that at least a fraction (and likely more than that) of our behavior is due to our genes". But, actually, "we" don't "know" this "now".
The takeaways from the book are: (1) populations differ genetically; (2) since populations differ genetically, then genetic differences correlated with behavior should show frequency differences between populations; (3) since these populations show both behavioral/societal differences and they also differ in genetic polymorphisms which are then associated with that behavior, then those polymorphisms are, in part, a cause of that society and the behavior found in it; (4) therefore, differences in Eastern and Western societies are explained by (some) of these polymorphisms discussed.
Now for a simple rebuttal of the book:
Consider two claims: "B iff G" (behavior B is possible if and only if a specific genotype G is instantiated) and "if G, then necessarily B" (genotype G is a sufficient cause for behavior B). Both claims are false; genes are neither a sufficient nor a necessary cause of any behavior. Genes are, of course, a necessary pre-condition for behavior, but they are not needed for any specific behavior to be instantiated; genes can, at best, be said to be difference makers (Sterelny and Kitcher, 1988) (but see Godfrey-Smith and Lewontin, 1993 for a response). Since these claims cannot be substantiated, "if G, then necessarily B" and "B iff G" are false; it cannot be shown that genes are difference makers in regard to behavior, nor that particular genes cause particular behaviors.
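To make the logical form of this rebuttal explicit, the two claims and their failure conditions can be written in modal notation (my formalization, not one that appears in Kiaris or the cited papers):

```latex
% Let G = "genotype G is instantiated" and B = "behavior B occurs".
% The two genetic-causation claims under discussion:
%   Necessity:   B \rightarrow G        (B occurs only if G is present)
%   Sufficiency: G \rightarrow \Box B   (G guarantees B)
% Each claim fails if its counterexample is possible:
\begin{align*}
  \Diamond(G \land \lnot B) &: \text{same genotype, different environment, no } B
    \text{ (refutes sufficiency)} \\
  \Diamond(B \land \lnot G) &: \text{different genotype, same behavior}
    \text{ (equifinality; refutes necessity)}
\end{align*}
```

On this reading, "genes as necessary pre-conditions" is the weaker, uncontested claim that some genome or other is required for there to be an organism at all; the contested claims are the two conditionals above about a specific G and a specific B.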
I'm surprised that I have not come across a book like this sooner; you would expect a lot more to have been written on this. The book is short and discusses some good resources, but the conclusions that Kiaris draws, in my opinion, will not come to pass, because genes are neither a necessary nor a sufficient cause of any type of behavior, nor can it be shown that genes are causes of any behavior B. Behavioral differences between Eastern and Western societies, logically, cannot come down to differences in genes, since genes are neither necessary nor sufficient causes of behavior (genes are necessary pre-conditions for behavior, since without genes there is no organism, but genes cannot explain behavior).
Kiaris attempts to show how and why Eastern and Western societies became so different: how and why Western thought is dominated by "Aristotle's reason and logic", while Eastern thought "has been dominated by Confucius's harmony, collectivism, and context dependency" (Kiaris, 2012: 9). While the book is well-written and well-researched (he covers nothing new if you're familiar with the literature), Kiaris fails to prove his ultimate point: that differences in genetic polymorphisms between individuals in different societies explain how and why the societies in question are so different. It is not logically possible for genes to be a necessary or sufficient cause of any behavior. Kiaris talks like a determinist when he says that "the presence of different genes in the corresponding people has actually dictated the acquisition of these distinct cultural and historical lines, and that an alternative outcome might be unlikely" (Kiaris, 2012: 9), but that is just wishful thinking: if we were able to start history over again, things would occur differently, "the presence of different genes in the corresponding people" be damned, since genes do not cause behavior.