Category Archives: HBD
In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)
In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the founder of sociobiology E.O. Wilson’s religious beliefs and the epiphany he described when he learned of evolution. A Christian author then used Sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked-in—IQ.
Philosopher of education John White has looked into the links between the origins of IQ testing and the Puritan religion. The main link between Puritanism and IQ is predestination. The first IQ-ists conceptualized IQ—‘g’ or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, whether one went to Hell was predestined before one even existed as a human being, whereas to the IQ-ists, IQ was predestined, due to genes.
John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:
Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton’s phrase ‘the extreme classes, the best and the worst.’ On the one hand there were the ‘gifted’, ‘the eminent’, ‘those who have honourably succeeded in life’, presumably … the most valuable portion of our human stock. On the other, the ‘feeble-minded’, the ‘cretins’, the ‘refuse’, those seeking to avoid ‘the monotony of daily labor’, democracy’s ballast, ‘not always useless but always a potential liability’.
A Puritan-type parallel can be drawn here—the ‘cretins’ and the ‘feeble-minded’ are ‘the damned’ whereas the ‘gifted’ and ‘the eminent’ are ‘the saved.’ This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a surfeit of genes that influence intellectual attainment. Compare the Puritan view: certain people are chosen before they exist to be either damned or saved. On the IQ-ist view, certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism about IQ is, in a way, just like Puritan predestination—and this was the view of Galton, Burt, and the other IQ-ists of the 1910s-1920s (ever since Goddard brought the Binet-Simon Scales back from France in 1910).
Some Puritans banned the poor from their communities seeing them as “disruptors to Puritan communities.” Stone (2018: 3-4) in An Invitation to Satan: Puritan Culture and the Salem Witch Trials writes:
The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.
Could the Puritan treatment of the poor be due to their belief in predestination? The Puritan John Winthrop stated in his book A Model of Christian Charity that “some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection.” This, too, is still around today: IQ is said to set “upper limits” on one’s “ability ceiling” to achieve X. The poor are those who do not have the ‘right genes’. This is also a reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The claim that one’s ability is predetermined in the genes—that each person has a ‘ceiling of ability’ constrained by their genotype—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.
To White (2006), the claim that we have an ‘innate capacity’ for this ‘general intelligence’ is wanting. He takes this further, though. In discussing Galton’s and Burt’s claim that there are ‘ability ceilings’—and a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs and that their scores increase by 15 points. This, then, would have a large effect on the correlation: “So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied ‘I doubt whether, had we returned a second time, the coaching would have affected our correlations’” (White, 2006: 16). Burt seems to be implying that a “ceiling of ability” exists, an idea he got from his mentor, Galton. White continues:
It would appear that neither Galton nor Burt has any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.
I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)
Burt believed that we should use IQ tests to shoe-horn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would then become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed to vocations that were based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:
Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.
So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school which ‘showed us’ which vocations we would be ‘good at’ based on a questionnaire). Having people in Binet’s ‘ideal city’ work according to their ‘known aptitudes’ would increase, not decrease, inequality—so Binet’s envisioned city looks very much like today’s world. Mensh and Mensh (1991: 24) continue:
When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.
White (2006: 42) writes:
Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.
Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. There are parallels between the Puritan concern for salvation and the IQ-ist belief that one’s ‘innate intelligence’ dictated whether one would succeed or fail in life (based on one’s genes). Both associated those lower on the social ladder—their work ethic and their morals—with the reprobate on the one hand and the low-IQ on the other. Both groups believed that the family is the ‘mechanism’ by which individuals are ‘saved’ or ‘damned’—the Puritans presuming that salvation is transmitted through one’s family, the IQ-ists that those with ‘high intelligence’ have children with the same. And both believed that their favored group should be at the top with the best jobs and the best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one was endowed with ‘innate general intelligence’ due to genes, a concept the current-day IQ-ists have carried forward.
White drew his parallel between IQ and Puritanism without being aware that it had already been drawn in the mid-1920s by one of the first anti-IQ-ists, the American journalist Walter Lippmann. (See Mensh and Mensh, 1991, for a discussion of Lippmann’s grievances with the IQ-ists.) The parallel runs between Puritanism and Galton’s concept of ‘intelligence’ as well as that of the IQ-ists today. White (2005: 440) notes “that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence.” Still, the evidence White has marshaled in favor of the claim is interesting, and, as noted, many parallels exist. It would be a huge coincidence for all of these parallels to exist without a causal link (from Puritan beliefs to hereditarian IQ dogma).
This is similar to what Oyama (1985: 53) notes:
Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.
But this parallel between Puritanism and hereditarianism doesn’t just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of ‘information’ before being activated by the physiological system for its uses still pervades our thought, even though many have been at the forefront of efforts to change that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).
The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have been identified as having their origins in eugenic beliefs, along with Puritan-like beliefs about being saved/damned on the basis of something predetermined and out of one’s control, just like one’s genetics. The conception of ‘ability ceilings’—measured by IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in ‘ability ceilings’ and claim that genes contain a kind of “blueprint” (a view still held today) which predestines one toward certain dispositions, behaviors, and actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is ‘known’ about one’s group. When Binet wrote that, the gene had yet to be conceptualized, but the idea has stayed with us ever since.
So not only did the concept of “IQ” emerge due to the ‘need’ to ‘identify’ individuals for their certain ‘aptitudes’ that they would be well-suited for in, for instance, Binet’s ideal city, it also arose from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the ‘predictions’ it ‘makes’ (see Nash, 1990).
Standardized testing—including IQ testing—has a contentious history. What causes score differences between groups of people? I have stated at least four possible reasons for the test gap:
(1) Differences in genes cause differences in IQ scores;
(2) Differences in environment cause differences in IQ scores;
(3) A combination of genes and environment cause differences in IQ scores; and
(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.
I hold to (4) since, as I have noted, the hereditarian-environmentalist debate is frivolous. There is, as I have been saying for years now, no agreed-upon definition of ‘intelligence’, since there are such disparate answers from the ‘experts’ (Lanz, 2000; Richardson, 2002).
For the lack of such a definition only reflects the fact that there is no worked-out theory of intelligence. Having a successful definition of intelligence without a corresponding theory would be like having a building without foundations. This lack of theory is also responsible for the lack of some principled regimentation of the very many uses the word ‘intelligence’ and its cognates are put to. Too many questions concerning intelligence are still open, too many answers controversial. Consider a few examples of rather basic questions: Does ‘intelligence’ name some entity which underlies and explains certain classes of performances, or is the word ‘intelligence’ only sort of a shorthand-description for ‘being good at a couple of tasks or tests’ (typically those used in IQ tests)? In other words: Is ‘intelligence’ primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities (compare Deese 1993)? Or should we turn our backs on the noun ‘intelligence’ and focus on the adverb ‘intelligently’, used to characterize certain classes of behaviors? (Lanz, 2000: 20)
Nash (1990: 133-4) writes:
Always since there are just a series of tasks of one sort or another on which performance can be ranked and correlated with other performances. Some performances are defined as ‘cognitive performances’ and other performances as ‘attainment performances’ on essentially arbitrary, common sense grounds. Then, since ‘cognitive performances’ require ‘ability’ they are said to measure that ‘ability’. And, obviously, the more ‘cognitive ability’ an individual possesses the more that individual can achieve. These procedures can provide no evidence that IQ is or can be measured, and it is rather beside the point to look for any, since that IQ is a metric property is a fundamental assumption of IQ theory. It is impossible that any ‘evidence’ could be produced by such procedures. A standardised test score (whether on tests designated as IQ or attainment tests) obtained by an individual indicates the relative standing of that individual. A score lies within the top ten percent or bottom half, or whatever, of those gained by the standardisation group. None of this demonstrates measurement of any property. People may be rank ordered by their telephone numbers but that would not indicate measurement of anything. IQ theory must demonstrate not that it has ranked people according to some performance (that requires no demonstration) but that they are ranked according to some real property revealed by that performance. If the test is an IQ test the property is IQ — by definition — and there can in consequence be no evidence dependent on measurement procedures for hypothesising its existence. The question is one of theory and meaning rather than one of technique. It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.
This is similar to Mary Midgley’s critique of ‘intelligence’ in her last book before her death, What Is Philosophy For? (Midgley, 2018). The ‘definitions’ of ‘intelligence’ and, along with it, its ‘measurement’ have never been satisfactory. Haier (2016: 24) refers to Gottfredson’s ‘definition’ of ‘intelligence’, stating that ‘intelligence’ is a ‘general mental ability.’ But if that is the case, that it is a ‘general mental ability’ (g), then ‘intelligence’ does not exist, because ‘g’ does not exist as a property in the brain. Lanz’s (2000) critique is also like Howe’s (1988; 1997): ‘intelligence’ is a descriptive, not explanatory, term.
Now that the concept of ‘intelligence’ has been covered, let’s turn to race and test bias.
Test items are biased when they have different psychological meanings across cultures (He and van de Vijver 2012: 7). If they have different meanings across cultures, then the tests will not reflect the same ‘ability’ between cultures. Being exposed to the knowledge on a test—and to its correct usage—is imperative for performance. For if one is not exposed to the content on the test, how can one be expected to do well? Indeed, there is much evidence that minority groups are not acculturated to the items on the test (Manly et al, 1997; Ryan et al, 2005; Boone et al, 2007). This is what IQ tests measure: acculturation to the culture of the tests’ constructors, school curriculum, and school teachers—aspects of white, middle-class culture (Richardson, 1998). Ryan et al (2005) found that reading and educational level, not race or ethnicity, were related to performance on psychological tests.
Serpell et al (2006) took 149 white and black fourth-graders and randomly assigned them to ethnically homogeneous groups of three, working on a motion task on a computer. Both blacks and whites learned equally well, but the transfer outcomes were better for blacks than for whites.
Helms (1992) claims that standardized tests are “Eurocentric”, which is “a perceptual set in which European and/or European American values, customs, traditions and characteristics are used as exclusive standards against which people and events in the world are evaluated and perceived.” In her conclusion, she stated that “Acculturation and assimilation to White Euro-American culture should enhance one’s performance on currently existing cognitive ability tests” (Helms, 1992: 1098). There just so happens to be evidence for this (along with the studies referenced above).
Fagan and Holland (2002) showed that when exposure to different kinds of information was required, whites did better than blacks, but when performance was based on generally available knowledge, there was no difference between the groups. Fagan and Holland (2007) asked whites and blacks to solve problems found on usual IQ-type tests (e.g., standardized tests). Half of the items were solvable on the basis of available information, but the other items were solvable only on the basis of previously acquired knowledge, which indicated test bias (Fagan and Holland, 2007). They, again, showed that when knowledge is equalized, so are IQ scores. Thus, cultural differences in information acquisition explain IQ scores. “There is no distinction between crassly biased IQ test items and those that appear to be non-biased” (Mensh and Mensh, 1991). This is because each item is chosen because it agrees with the distribution that the test constructors presuppose (Simon, 1997).
How do the neuropsychological studies referenced above, along with Fagan and Holland’s studies, show that test bias—and, along with it, test construction—is built into the test and causes the observed distribution of scores? Simple: since the test constructors come from a higher social class, and the items chosen for inclusion on the test are more likely to be familiar to certain cultural groups than to others, it follows that lower scorers were simply not exposed to the culturally specific knowledge used on the test (Richardson, 2002; Hilliard, 2012).
The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)
This is very easily seen with how such tests are constructed. The biases go back to the beginning of standardized testing—the first one being the SAT. The tests’ constructors had an idea of who was or was not ‘intelligent’ and so constructed the tests to show what they already ‘knew.’
…as one delves further … into test construction, one finds a maze of arbitrary steps taken to ensure that the items selected — the surrogates of intelligence — will rank children of different classes in conformity with a mental hierarchy that is presupposed to exist. (Mensh and Mensh, 1991)
Garrison (2009: 5) states that standardized tests “exist to assess social function” and that “Standardized testing—or the theory and practice known as “psychometrics” … is not a form of measurement.” Tests are constructed today the same way they were constructed in the 1900s—with arbitrary items and a presupposed mental hierarchy which then becomes baked into the tests by virtue of how they are constructed.
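The item-screening claim above can be made concrete with a toy simulation. The numbers and the selection rule here are invented for illustration, not taken from any cited study: a pool of items is balanced between two groups' cultural familiarity, then "piloted" items that do not fit a presupposed ranking are discarded, and the finished test shows a gap the full pool never had.

```python
import random

random.seed(42)

# Hypothetical item pool: each item has a pass probability per group.
# Half the items are more familiar to group A, half to group B, so the
# full pool favors neither group on average.
pool = [{"p_a": 0.8, "p_b": 0.5} for _ in range(50)] + \
       [{"p_a": 0.5, "p_b": 0.8} for _ in range(50)]

def mean_score(items, key, n=2000):
    # Average number of items passed by n simulated test-takers.
    return sum(sum(random.random() < it[key] for it in items)
               for _ in range(n)) / n

# "Pilot" screening: keep only items on which group A outscores group B,
# mimicking selection of items that agree with a presupposed hierarchy.
kept = [it for it in pool if it["p_a"] > it["p_b"]]

full_gap = mean_score(pool, "p_a") - mean_score(pool, "p_b")
kept_gap = mean_score(kept, "p_a") - mean_score(kept, "p_b")

print(round(full_gap, 1))  # near zero: the balanced pool shows no group gap
print(round(kept_gap, 1))  # large: the screened test has the gap built in
```

Each individual kept item looks like an ordinary test question; only the selection step, invisible in the final instrument, produces the group difference.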
IQ-ists like to say that certain genes are associated with high intelligence (using their GWASes), but what could the argument possibly be that would show that variation in SNPs causes variation in ‘intelligence’? What would a theory of that look like? How is the hereditarian hypothesis not a just-so story? Such tests were created to justify the hierarchies in society; the tests were constructed to give the results that they get. So I don’t see how genetic ‘explanations’ are not just-so stories.
(1) Blacks and whites are different cultural groups.
(2) If (1), then they will have different experiences by virtue of being different cultural groups.
(3) So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures; and so all tests of ability are culture-bound.

Knowledge, Culture, Logic, and IQ
Rushton and Jensen (2005) claim that the evidence they review over the past 30 years of IQ testing points to a ‘genetic component’ to the black-white IQ gap, relying on the flawed Minnesota study of twins “reared apart” (Joseph, 2018)—among other methods—to generate heritability estimates and state that “The new evidence reviewed here points to some genetic component in Black–White differences in mean IQ.” The concept of heritability, however, is a flawed metric (Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2014; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017). That G and E interact means that we cannot tease out “percentages” of nature and nurture’s “contribution” to a “trait.” So, one cannot point to heritability estimates as if they point to a “genetic cause” of the score gap between blacks and whites. Further note that the gap has closed in recent years (Dickens and Flynn, 2006; Smith, 2018).
And now, here is another argument based on the differing experiences that cultural groups have, which then explain IQ score differences (eg Mensh and Mensh, 1991; Manly et al, 1997; Kwate, 2001; Fagan and Holland, 2002, 2007; Cole, 2004; Ryan et al, 2005; Boone et al, 2007; Au, 2008; Hilliard, 2012; Au, 2013).
(1) If children of different class levels have experiences of different kinds with different material; and
(2) if IQ tests draw a disproportionate amount of test items from the higher classes; then
(c) higher class children should have higher scores than lower-class children.
Nature vs nurture can be said to be a debate over what is ‘innate’ and what is ‘acquired’ in an organism. Debates about how nature and nurture tie into athletic ability and race both fall back on this dichotomous notion. “Athleticism is innate and genetic!”, the hereditarian proclaims. “That blacks of West African ancestry are over-represented in the 100m dash is evidence of nature over nurture!” How simplistic these claims are.
Steve Sailer, in his response to Birney et al on the existence of race, assumes that because those with West African ancestry have consistently produced the most finalists (and winners) in the Olympic 100m dash, race must therefore exist.
I pointed out on Twitter that it’s hard to reconcile the current dogma about race not being a biological reality with what we see in sports, such as each of the last 72 finalists in the Olympic 100-meter dash going all the way back to 1984 nine Olympics ago being at least half sub-Saharan in ancestry.
the abundant data suggesting that individuals of sub-Saharan ancestry enjoy genetic advantages.
For example, it’s considered fine to suggest that the reason that each new Dibaba is fast is due to their shared genetics. But to say that one major reason Ethiopians keep winning Olympic running medals (now up to 54, but none at any distance shorter than the 1,500-meter metric mile because Ethiopians lack sprinting ability) is due to their shared genetics is thought unthinkable.
Sailer’s argument seems to be “Group X is better than Group Y at event A. Therefore, X and Y are races”, which is similar to the hereditarian arguments on the existence of ‘race’—just assume they exist.
The outright reductionism to genes in Sailer’s view on athleticism and race is plainly obvious. That blacks are over-represented in certain sports (e.g., football and basketball) is taken to be evidence for this type of reductionism that Sailer and others appeal to (Gnida, 1995). Such appeals can be said to be implicitly saying “The reason why blacks succeed at sport is due to genes while whites succeed due to hard work, so blacks don’t need to work as hard as whites when it comes to sports.”
There are anatomic and physiological differences between groups deemed “black” and “white”, and these differences do influence sporting success. Even though this is true, this does not mean that race exists. Such reductionist claims—as I myself have espoused years ago—do not hold up. Yes, blacks have a higher proportion of type II muscle fibers (Caesar and Henry, 2015), but this does not alone explain success in certain athletic disciplines.
Current genetic testing cannot identify an athlete (Pitsiladis et al, 2013). I reviewed some of the literature on power genotypes and race and concluded that there are no genes yet identified that can be said to be a sufficient cause of success in power sports.
Just because group A has gene or gene network G and competes in competition C does not mean that G contributes in full—or in part—to sporting success. The correlations could be coincidental and non-functional in regard to the sport in question. Athletes should be studied in isolation: study a specific athlete in a specific discipline to ascertain how, what, and why something works for that athlete, along with taking anthropometric measures, gauging how badly they want “it”, and weighing other environmental factors such as nutrition and training. Looking at the body as a system takes us away from privileging one part over another—while still recognizing that genes play a role, just not the role that reductionists believe.
No evidence exists for DNA variants that are common to endurance athletes (Rankinen et al, 2016). But they do have one thing in common (an environmental effect on biology): those born at altitude have a permanently altered ventilatory response as adults, while “Peruvians born at altitude have a nearly 10% larger forced vital capacity compared to genetically matched Peruvians born at sea level” (Brutasaert and Parra, 2009: 16). Certain environmental effects on biology are well known, and those biological changes do help in certain athletic events (Epstein, 2014). Yan et al (2016) conclude that “the traditional argument of nature versus nurture is no longer relevant, as it has been clearly established that both are important factors in the road to becoming an elite athlete.”
Georgiades et al (2017) go the other way, and what they argue is clear from the title of their paper, “Why nature prevails over nurture in the making of the elite athlete.” They continue:
Despite this complexity, the overwhelming and accumulating evidence, amounted through experimental research spanning almost two centuries, tips the balance in favour of nature in the “nature” and “nurture” debate. In other words, truly elite-level athletes are built – but only from those born with innate ability.
They use twin studies as an example, stating that heritability greater than 50% but lower than 100% means “that the environment is also important.” But this is a strange take, especially from seasoned sports scientists (like Pitsiladis). Attempting to partition traits into ‘nature’ and ‘nurture’ components and then arguing that the emergence of a trait is due more to genetics than environment is an erroneous use of heritability estimates. It is not possible—nor even feasible—to separate traits into genetic and environmental components. The question does not even make sense.
“… the question of how to separate the native from the acquired in the responses of man does not seem likely to be answered because the question is unintelligible.” (Leonard Carmichael 1925, quoted in Genes, Determinism and God, Alexander, 2017)
Tucker and Collins (2012) write:
Rather, individual performance thresholds are determined by our genetic make-up, and training can be defined as the process by which genetic potential is realised. Although the specific details are currently unknown, the current scientific literature clearly indicates that both nurture and nature are involved in determining elite athletic performance. In conclusion, elite sporting performance is the result of the interaction between genetic and training factors, with the result that both talent identification and management systems to facilitate optimal training are crucial to sporting success.
Tucker and Collins (2012) define training as the realization of genetic potential, while DNA “control[s] the ceiling” of what one may be able to accomplish: “… training maximises the likelihood of obtaining a performance level with a genetically controlled ‘ceiling’, accounts for the observed dominance of certain populations in specific sporting disciplines” (Tucker and Collins, 2012: 6). “Training” is the environment here, and the “genetically controlled ‘ceiling'” stands in for genes. The authors argue that while training is important, it merely realizes the ‘potential’ that is ‘already in’ the genes—an erroneous way of looking at genes. Shenk (2010: 107) explains why:
As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person’s genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction.
The model proposed by Tucker and Collins (2012) is pretty reductionist (see Ericsson, 2012 for a response), while the model proposed by Shenk (2010) is more holistic. The hypothetical model explaining Kenyan distance running success (Wilbur and Pitsiladis, 2012) is likewise a more realistic way of assessing sport dominance.
The formation of an elite athlete comes down to a combination of genes, training, and numerous other interacting factors. The attempt to boil the appearance of a certain trait down to either ‘genes’ or ‘environment’ and partition them into percentages is an unsound procedure. That a certain group continuously wins a certain event does not constitute evidence that the group in question is a race, nor does it constitute evidence that ‘genes’ are the cause of the outcome between groups in that event. The holistic model of human athletic performance, in which genes contribute to certain physiological processes along with training and other biomechanical and psychological factors, is the correct way to think about sport and race. Actually seeing an athlete in motion in his preferred sport is (and I believe always will be) superior to mere genetic analyses. Genetic tests also have “no role to play in talent identification” (Webborn et al, 2015).
One emerging concept is that there are many potential genetic pathways to a given phenotype. This concept is consistent with ideas that biological redundancy underpins complex multiscale physiological responses and adaptations in humans. From an applied perspective, the ideas discussed in this review suggest that talent identification on the basis of DNA testing is likely to be of limited value, and that field testing, which is essentially a higher order ‘bioassay’, is likely to remain a key element of talent identification in both the near and foreseeable future. (Joyner, 2019; Genetic Approaches for Sports Performance: How Far Away Are We?)
Athleticism is irreducible to biology (Louis, 2004). Holistic views (nature and nurture) will beat reductionist views (nature vs. nurture); given how biological systems work, there is no reason to privilege one level over another (Noble, 2012), and so no reason to privilege the gene over the environment or the environment over the gene. The interaction of multiple factors explains sport success.
Mary Midgley (1919-2018) is a philosopher perhaps most well-known for her writing on moral philosophy and rejoinders to Richard Dawkins after his publication of The Selfish Gene. Before her passing in October of 2018, she published What Is Philosophy For? on September 21st. In the book, she discusses ‘intelligence’ and its ‘measurement’ and comes to familiar conclusions.
‘Intelligence’ is not a ‘thing’ like, say, temperature or weight (though it is reified as one). Thermometers measure temperature, and this was verified without relying on the thermometer itself (see Hasok Chang, Inventing Temperature). Temperature can be expressed in units like kelvin, Celsius, and Fahrenheit. Temperature is a measure of the kinetic energy of heat: ‘thermo’ means heat while ‘meter’ means to measure, so heat is what is being measured with a thermometer.
Scales measure weight. If energy balance is stable, weight will be stable too. Eat too much or too little, and weight gain or loss will occur. But animals seem to have a body-weight set point, which has been experimentally demonstrated (Leibel, 2008). In any case, what a scale measures is the overall weight of an object, which it does by measuring the force between the weighed object and the earth.
The whole concept of ‘intelligence’ is hopelessly unreal.
Prophecies [like those of people who work on AI] treat intelligence as a quantifiable stuff, a standard, unvarying substance like granulated sugar, a substance found in every kind of cake — a substance which, when poured on in larger quantities, always produces a standard improvement in performance. This mythical way of talking has nothing to do with the way in which cleverness — and thought generally — actually develops among human beings. This imagery is, in fact, about as reasonable as expecting children to grow up into steamrollers on the ground that they are already getting larger and can easily be trained to stamp down gravel on roads. In both cases, there simply is not the kind of continuity that would make any such progress conceivable. (Midgley, 2018: 98)
We recognize the divergence of interests all the time when we are trying to find suitable people for different situations. Thus Bob may be an excellent mathematician but is still a hopeless sailor, while Tim, that impressive navigator, cannot deal with advanced mathematics at all. Which of them, then, should be considered the more intelligent? In real life, we don’t make the mistake of trying to add these people’s gifts up quantitatively to make a single composite genius and then hope to find him. We know that planners wanting to find a leader for their exploring expedition must either choose between these candidates or send both of them. Their peculiar capacities grow out of their special interests in topics, which is not a measurable talent but an integral part of their own character.
In fact, the word ‘intelligence’ does not name a single measurable property, like ‘temperature’ or ‘weight’. It is a general term like ‘usefulness’ or ‘rarity’. And general terms always need a context to give them any detailed application. It makes no more sense to ask whether Newton was more intelligent than Shakespeare than it does to ask if a hammer is more useful than a knife. There can’t be such a thing as an all-purpose intelligence, any more than an all-purpose tool. … Thus the idea of a single scale of cleverness, rising from the normal to beyond the highest known IQ, is simply a misleading myth.
It is unfortunate that we have got so used today to talk of IQs, which suggests that this sort of abstract cleverness does exist. This has happened because we have got used to ‘intelligence tests’ themselves, devices which sort people out into convenient categories for simple purposes, such as admission to schools and hospitals, in a way that seems to quantify their ability. This leads people to think that there is indeed a single quantifiable stuff called intelligence. But, for as long as these tests have been used, it has been clear that this language is too crude even for those simple cases. No sensible person would normally think of relying on it beyond those contexts. Far less can it be extended as a kind of brain-thermometer to use for measuring more complex kinds of ability. The idea of simply increasing intelligence in the abstract — rather than beginning to understand some particular kind of thing better — simply does not make sense. (Midgley, 2018: 100-101)
IQ researchers, though, take IQ to be a measure of a quantitative trait that can be measured in increments—like height, weight, and temperature. “So, in deciding that IQ is a quantitative trait, investigators are making big assumptions about its genetic and environmental background” (Richardson, 2000: 61). But there is no validity to the measure and hence no backing for the claim that it is a quantitative trait and measures what they suppose it does.
Just because we refer to something abstract does not mean that it has a referent in the real world; just because we call something ‘intelligence’ and say that it is tested—however crudely—by IQ tests does not mean that it exists and that the test is measuring it. Thermometers measure temperature; scales measure weight; IQ tests … don’t measure ‘intelligence’ (whatever that is), they measure acculturated knowledge and skills. Howe (1997: 6) writes that psychological test scores are “an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons”, while Richardson (1998: 127) writes that “The most reasonable answer to the question “What is being measured?”, then, is ‘degree of cultural affiliation’: to the culture of test constructors, school teachers and school curricula.”
But the word ‘intelligence’ refers to what? The attempt to measure ‘intelligence’ is a failure as such tests cannot be divorced from their cultural contexts. This won’t stop IQ-ists, though, from claiming that we can rank one’s mind as ‘better’ than another on the basis of IQ test scores—even if they can’t define ‘intelligence’. Midgley’s chapter, while short, gets straight to the point. ‘Intelligence’ is not a ‘thing’ like height, weight, or temperature. Height can be measured by a ruler; weight can be measured by a scale; temperature can be measured by a thermometer. Intelligence? Can’t be measured by an IQ test.
Genetic reductionism refers to the belief that understanding our genes will let us understand everything from human behavior to disease. The behavioral genetic approach claims to be the best way to parse through social and biological causes of health, disease, and behavior. The aim of genetic reductionism is to reduce a complex biological system to the sum of its parts. While there was some value in this approach when our technology was in its infancy, and we did learn a lot about what makes us “us”, the reductionist paradigm has outlived its usefulness.
If we want to understand a complex biological system then we shouldn’t use gene scores, heritability estimates, or gene sequencing. We should be attempting to understand how the whole biological system interacts with its surroundings—its environment.
Reductionists may claim that “gene knockout” studies can point us in the direction of genetic causation—“knockout” a gene and, if there are any changes, then we can say that that gene caused that trait. But is it so simple? Richardson (2000) puts it well:
All we know for sure is that rare changes, or mutations, in certain single genes can drastically disrupt intelligence, by virtue of the fact that they disrupt the whole system.
Noble (2011) writes:
Differences in DNA do not necessarily, or even usually, result in differences in phenotype. The great majority, 80%, of knockouts in yeast, for example, are normally ‘silent’ (Hillenmeyer et al. 2008). While there must be underlying effects in the protein networks, these are clearly buffered at the higher levels. The phenotypic effects therefore appear only when the organism is metabolically stressed, and even then they do not reveal the precise quantitative contributions for reasons I have explained elsewhere (Noble, 2011). The failure of knockouts to systematically and reliably reveal gene functions is one of the great (and expensive) disappointments of recent biology. Note, however, that the disappointment exists only in the gene-centred view. By contrast it is an exciting challenge from the systems perspective. This very effective ‘buffering’ of genetic change is itself an important systems property of cells and organisms.
Moreover, even when a difference in the phenotype does become manifest, it may not reveal the function(s) of the gene. In fact, it cannot do so, since all the functions shared between the original and the mutated gene are necessarily hidden from view. … Only a full physiological analysis of the roles of the protein it codes for in higher-level functions can reveal that. That will include identifying the real biological regulators as systems properties. Knockout experiments by themselves do not identify regulators (Davies, 2009).
All that knocking out or changing genes/alleles will do is show us that a trait (T) is correlated with a gene (G), not that T is caused by G. Merely observing a correlation after changing or knocking out genes tells us nothing about biological causation. Reductionism will not help us understand the etiology of disease, as the discipline of physiology is not reductionist at all; it is a holistic discipline.
Lewontin (2000: 12) writes in the introduction to The Ontogeny of Information: “But if I successfully perform knockout experiments on every gene that can be seen in such experiments to have an effect on, say, wing shape, have I even learned what causes the wing shape of one species or individual to differ from that of another? After all, two species of Drosophila presumably have the same relevant set of loci.”
But the loss of a gene can be compensated by another gene—a phenomenon known as genetic compensation. In a complex bio-system, when one gene is knocked out, another similar gene may take the ‘role’ of the knocked-out gene. Noble (2006: 106-107) explains:
Suppose there are three biochemical pathways A, B, and C, by which a particular necessary molecule, such as a hormone, can be made in the body. And suppose the genes for A fail. What happens? The failure of the A genes will stimulate feedback. This feedback will affect what happens with the sets of genes for B and C. These alternate genes will be more extensively used. In the jargon, we have here a case of feedback regulation; the feedback up-regulates the expression levels of the two unaffected genes to compensate for the genes that got knocked out.
Clearly, in this case, we can compensate for two such failures and still be functional. Only if all three mechanisms fail does the system as a whole fail. The more parallel compensatory mechanisms an organism has, the more robust (fail-safe) will be its functionality.
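Noble’s three-pathway example above can be sketched as a toy feedback loop (a deliberately simplified model of my own with made-up numbers, not Noble’s actual equations): a controller senses the shortfall in hormone output and up-regulates whichever pathways remain intact.

```python
TARGET = 100.0  # required hormone output

def hormone_level(active_pathways, expression):
    # Each intact pathway contributes output equal to its expression level.
    return sum(expression[p] for p in active_pathways)

def regulate(active_pathways, steps=200, gain=0.1):
    # Start with all three pathways sharing the load equally.
    expression = {p: TARGET / 3 for p in ("A", "B", "C")}
    for _ in range(steps):
        # Feedback: up-regulate the surviving pathways in proportion
        # to the shortfall (knocked-out genes cannot respond).
        error = TARGET - hormone_level(active_pathways, expression)
        for p in active_pathways:
            expression[p] += gain * error / len(active_pathways)
    return hormone_level(active_pathways, expression)

print(round(regulate({"A", "B", "C"})))  # all genes intact
print(round(regulate({"B", "C"})))       # "A" knocked out: B and C compensate
print(round(regulate(set())))            # all three fail: system as a whole fails
```

The knockout of A leaves the phenotype (hormone level) essentially unchanged, which is exactly why a knockout experiment alone cannot reveal the gene’s function.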
The Neo-Darwinian Synthesis has trouble explaining such compensatory genetic mechanisms, but the systems view (Developmental Systems Theory, DST) does not. Even if a knockout affects the phenotype, we cannot say that the gene outright caused the phenotype; rather, the system was perturbed and responded in that way.
Genetic networks and their role in development became clear when geneticists began using knockout techniques to disable genes known to be implicated in the development of characters, only to find the phenotype unchanged: this, again, is genetic compensation. Jablonka and Lamb (2005: 67) describe three reasons why the genome can compensate for the absence of a particular gene:
first, many genes have duplicate copies, so when both alleles of one copy are knocked out, the reserve copy compensates; second, genes that normally have other functions can take the place of a gene that has been knocked out; and third, the dynamic regulatory structure of the network is such that knocking out single components is not felt.
Using Waddington’s epigenetic landscape example, Jablonka and Lamb (2005: 68) go on to say that if you knocked a peg out, “processes that adjust the tension on the guy ropes from other pegs could leave the landscape essentially unchanged, and the character quite normal. … If knocking out a gene completely has no detectable effect, there is no reason why changing a nucleotide here and there should necessarily make a difference. The evolved network of interactions that underlies the development and maintenance of every character is able to accommodate or compensate for many genetic variations.”
“multiple alternative pathways . . . are the rule rather than the exception . . . such pathways can continue to function despite amino acid changes that may impair one intermediate regulator. Our results underscore the importance of systems biology approaches to understand functional and evolutionary constraints on genes and proteins.” (Quoted in Richardson, 2017: 132)
When it comes to disease, genes are said to be difference-makers—that is, a one-gene difference/mutation is what causes the disease phenotype. Genes, of course, interact with our lifestyles and are implicated in the development of disease—as necessary, not sufficient, causes. GWA studies (genome-wide association studies) have been all the rage for the past ten or so years. To find alleles ‘associated’ with disease, GWA practitioners take healthy people and diseased people, sequence their genomes, and then look for alleles that are more common in one group than the other. Alleles more common in the disease group are said to be ‘associated’ with the disease, while alleles more common in the control group can be said to be protective against it (Kampourakis, 2017: 102). (This same process is how ‘intelligence‘ is GWASed.)
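The case-control comparison just described boils down to a 2×2 allele-frequency table. A minimal sketch with made-up counts (the numbers are hypothetical; the point is that the output is a correlation, not a demonstration of biological causation):

```python
def odds_ratio(a, b, c, d):
    # a/b: allele present/absent in cases; c/d: present/absent in controls.
    return (a / b) / (c / d)

def chi_square_2x2(a, b, c, d):
    # Pearson chi-square for a 2x2 table, no continuity correction.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical SNP: allele counts in 1000 cases and 1000 controls.
cases_with, cases_without = 300, 700
ctrls_with, ctrls_without = 200, 800

or_ = odds_ratio(cases_with, cases_without, ctrls_with, ctrls_without)
chi2 = chi_square_2x2(cases_with, cases_without, ctrls_with, ctrls_without)
print(f"odds ratio = {or_:.2f}, chi-square = {chi2:.1f}")
# A chi-square above ~3.84 is 'significant' at p < .05; the allele is then
# reported as 'associated' with the disease -- nothing more than that.
```

Everything downstream of this arithmetic (talk of genes ‘for’ disease) is interpretation layered on top of a frequency difference.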
“Disease is a character difference” (Kampourakis, 2017: 132). So if disease is a character difference, and differences in genes cannot explain the existence of different characters but can explain the variation in characters, then the same must hold for disease.
“Gene for” talk is about the attribution of characters and diseases to DNA, even though it is not DNA that is directly responsible for them. … Therefore, if many genes produce or affect the production of the protein that in turn affects a character or disease, it makes no sense to identify one gene as the gene responsible “for” this character or disease. Single genes do not produce characters or disease … (Kampourakis, 2017: 134-135)
This all stems from the “blueprint metaphor”—the belief that the genome contains a blueprint for form and development. There are, however, no ‘genes for’ characters or diseases; therefore, genetic determinism is false.
Genes, in fact, are weakly associated with disease. A new study (Patron et al, 2019) analyzed 569 GWA studies covering 219 different diseases. David Wishart (one of the co-authors) was interviewed by Reuters, where he said:
“Despite these rare exceptions [genes accounting for half of the risk of acquiring Crohn’s, celiac and macular degeneration], it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” Wishart said.
“Based on our results, more than 95% of diseases or disease risks (including Alzheimer’s disease, autism, asthma, juvenile diabetes, psoriasis, etc.) could NOT be predicted accurately from SNPs.”
It seems like this is, yet again, another failure of the reductionist paradigm. We need to understand how genes interact in the whole human biological system, not reduce our system to the sum of its parts (‘genes’). Programs like this are premised on reductionist assumptions; it seems intuitive to think that many diseases are ’caused by’ genes, as if genes are ‘in control’ of development. However, what is truly ‘in control’ of development is the physiological system, in which genes are used only as resources, not causes. The reductionist (neo-Darwinist) paradigm cannot really explain genetic compensation after knocking out genes, but the systems view can. The amazing complexity of bio-systems allows them to buffer against developmental miscues and missing genes in order to complete the development of the organism.
Genes are not active causes but passive resources; they therefore cannot, on their own, cause disease and characters.
HBDers like to push the perception that their ideas are not discussed in public discourse, that the truth is withheld from the public due to a nefarious plot to shield people from the truths they so heroically attempt to bring to the dumb masses. They like to claim that the field and its practitioners are ‘silenced’, rejected outright for the ‘wrongthink’ ideas they hold. But if we look at what kinds of studies actually reach the public, a different picture emerges.
The title of Cofnas’ (2019) paper is Research on group differences in intelligence: A defense of free inquiry; the title of Carl’s (2018) paper is How Stifling Debate Around Race, Genes and IQ Can Do Harm; and the title of Meisenberg’s (2019) paper is Should Cognitive Differences Research Be Forbidden? Meisenberg’s paper is the most direct response to my most recent article, an argument to ban IQ tests due to the class/racial bias they embody, which may then be used to enact undesirable consequences on groups that score low. But like all IQ-ists, these authors assume that IQ tests are tests of intelligence, which is a dubious assumption. In any case, all three seem to think there is a silencing of their work.
For Darwin200 (his 200th birthday) back in 2009, the question “Should scientists study race and IQ?” was asked in the journal Nature. Neuroscientist Steven Rose (2009: 788) answered “No”, writing:
The problem is not that knowledge of such group intelligence differences is too dangerous, but rather that there is no valid knowledge to be found in this area at all. It’s just ideology masquerading as science.
Ceci and Williams (2009: 789) answered “Yes” to the question, writing:
When scientists are silenced by colleagues, administrators, editors and funders who think that simply asking certain questions is inappropriate, the process begins to resemble religion rather than science. Under such a regime, we risk losing a generation of desperately needed research.
John Horgan wrote in Scientific American:
But another part of me wonders whether research on race and intelligence—given the persistence of racism in the U.S. and elsewhere–should simply be banned. I don’t say this lightly. For the most part, I am a hard-core defender of freedom of speech and science. But research on race and intelligence—no matter what its conclusions are—seems to me to have no redeeming value.
And when he says that “research on race and intelligence … should simply be banned“, he means:
Institutional review boards (IRBs), which must approve research involving human subjects carried out by universities and other organizations, should reject proposed research that will promote racial theories of intelligence, because the harm of such research–which fosters racism even if not motivated by racism–far outweighs any alleged benefits. Employing IRBs would be fitting, since they were formed in part as a response to one of the most notorious examples of racist research in history, the Tuskegee Syphilis Study, which was carried out by the U.S. Public Health Service from 1932 to 1972.
At the end of the 2000s, journalist William Saletan was big in the ‘HBD-sphere’ due to his writings on sport and race, and on race and IQ. But in 2018, after the Harris/Murray fiasco on Harris’ podcast, Saletan wrote:
Many progressives, on the other hand, regard the whole topic of IQ and genetics as sinister. That too is a mistake. There’s a lot of hard science here. It can’t be wished away, and it can be put to good use. The challenge is to excavate that science from the muck of speculation about racial hierarchies.
What’s the path forward? It starts with letting go of race talk. No more podcasts hyping gratuitous racial comparisons as “forbidden knowledge.” No more essays speaking of grim ethnic truths for which, supposedly, we must prepare. Don’t imagine that if you posit an association between race and some trait, you can add enough caveats to erase the impression that people can be judged by their color. The association, not the caveats, is what people will remember.
If you’re interested in race and IQ, you might bristle at these admonitions. Perhaps you think you’re just telling the truth about test scores, IQ heritability, and the biological reality of race. It’s not your fault, you might argue, that you’re smeared and misunderstood. Harris says all of these things in his debate with Klein. And I cringe as I hear them, because I know these lines. I’ve played this role. Harris warns Klein that even if we “make certain facts taboo” and refuse “to ever look at population differences, we will be continually ambushed by these data.” He concludes: “Scientific data can’t be racist.”
Of course “scientific data can’t be racist”, but data can be used by racists for racist motives, and the tool used to collect the data can be inherently biased against certain groups, which means it favors certain other groups.
Saletan claims that IQ tests can be ‘put to good use’, but it is “illogical” to hold that the use of IQ tests was negative in some instances and positive in others; it is either one or the other, and you cannot hold that IQ testing is good here and bad there.
Callier and Bonham (2015) write:
These types of assessments cannot be performed in a vacuum. There is a broader social context with which all investigators must engage to create meaningful and translatable research findings, including intelligence researchers. An important first step would be for the members of the genetics and behavioral genetics communities to formally and directly confront these challenges through their professional societies and the editorial boards of journals.
If traditional biases triumph over scientific rigor, the research will only exacerbate existing educational and social disparities.
Tabery (2015) states that:
it is important to remember that even if the community could keep race research at bay and out of the newspaper headlines, research on the genetics of intelligence would still not be expunged of all controversy.
IQ “science” is a subfield of behavioral genetics, so the overarching controversy concerns behavioral genetics as a whole (see Panofsky, 2014). Given how Cofnas (2019), Carl (2018), and Meisenberg (2019) talk about race and IQ, you would expect hardly any IQ research to be reported in mainstream outlets. But that is not what we find. When we compare what is published regarding behavioral genetic studies with regular genetic studies, a stark contrast emerges.
Society at large already harbors genetic-determinist attitudes and beliefs, and what the mainstream newspapers put out then solidifies the populace’s false beliefs. Even a populace better educated about genes and trait ontogeny will not necessarily be supportive of new genetics research and discoveries; they can even be critical of such studies (Etchegary et al, 2012). Schweitzer and Saks (2007) showed that the popular show CSI pushes false concepts of genetic testing on the public, portraying DNA testing as quick, reliable, and decisive in prosecutions; about 40 percent of the ‘science’ used on CSI does not exist, which also promulgates false beliefs about genetics in society. Lewis et al (2000) asked schoolchildren “Why are genes important?”: 73 percent responded that genes are important because they determine characters, 14 percent that they are important because they transfer information, and none spoke of gene products.
In the book Genes, Determinism and God, Denis Alexander (2017: 18) states that:
Much data suggest that the stories promulgated by the kind of ‘elite media’ stories cited previously do not act as ‘magic bullets’ to be instantly absorbed by the reader, but rather are resisted, critiqued or accepted depending on the reader’s economic interests, health and social status and access to competing discourses. A recurring theme is that people display a ‘two-track model’ in which they can readily switch between more genetic deterministic explanations for disease or different behaviors and those which favour environmental factors or human choice (Condit et al., 2009).
The so-called two-track model is simple: one holds genetic-determinist beliefs about a certain trait, like heart disease or diabetes, but then contradicts oneself and states that diet and exercise can ameliorate any future complications (Condit, 2010). Though holding “behavioral causal beliefs” (that one’s behavior is causal in regard to disease acquisition) is associated with behavioral change (Nguyen et al, 2015). This seems to be an example of what Bo Winegard means when he uses the term “selective blank slatism” or “ideologically motivated blank slatism.” But if one’s ideology motivates them to believe that genes are causal regarding health, intelligence, and disease, or to reject that claim, then on the hereditarian’s own logic that ideology must be genetically mediated too. So how can we ever have objective science if people are biased by their genetics?
Condit (2011: 625) compiled a chart of people’s attitudes toward how ‘genetic’ various traits are perceived to be.
Clearly, the public understands genes as playing more of a role when it comes to bodily traits, while environment plays more of a role when it comes to things that humans have agency over—things relating to the mind (Condit and Shen, 2011). “… people seem to deploy elements of fatalism or determinism into their worldviews or life goals when they suit particular ends, either in ways that are thought to ‘explain’ why other groups are the way they are or in ways that lessen their own sense of personal responsibility (Condit, 2011)” (Alexander, 2017: 19).
So, behavioral geneticists must be silenced, right? Bubela and Caufield (2004: 1402) write:
Our data may also indicate a more subtle form of media hype, in terms of what research newspapers choose to cover. Behavioural genetics and neurogenetics were the subject of 16% of the newspaper articles. A search of PubMed on May 30, 2003, with the term “genetics” yielded 1 175 855 hits, and searches with the terms “behavioural genetics” and “neurogenetics” yielded a total of 3587 hits (less than 1% of the hits for “genetics”).
So Bubela and Caufield (2004) found that 11 percent of the newspaper articles they examined had moderately to highly exaggerated claims, while 26 percent were slightly exaggerated. Behavioral genetic/neurogenetic stories comprised 16 percent of the articles they found, while behavioral genetics made up less than one percent of the academic genetics literature, which “might help explain why the reader gains the impression that much of genetics research is directed towards explaining human behavior; such copy makes newsworthy stories for obvious reasons” (Alexander, 2017: 17-18). Behavioral genetics research is ‘silenced’ indeed!
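A quick back-of-the-envelope with the figures quoted above from Bubela and Caufield (2004) makes the disproportion concrete:

```python
pubmed_genetics = 1_175_855   # PubMed hits for "genetics" (May 30, 2003)
pubmed_behavioural = 3_587    # hits for "behavioural genetics" + "neurogenetics"
newspaper_share = 0.16        # share of their sampled newspaper articles

# Behavioural genetics as a fraction of the genetics literature,
# versus its share of newspaper coverage.
literature_share = pubmed_behavioural / pubmed_genetics
print(f"share of the genetics literature: {literature_share:.2%}")
print(f"overrepresentation in newspapers: {newspaper_share / literature_share:.0f}x")
```

On these numbers, behavioral genetics is covered in newspapers at roughly fifty times its weight in the research literature.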
The public perception of genetics lines up with that of genetics researchers in some ways but not in others. The public at large is bombarded with numerous messages per day, especially in the TV programs they watch (inundated with ad after ad). Certain researchers claim that ‘free inquiry’ into race and IQ is being hushed. To Cofnas (2019) I would say, “In virtue of what is it ‘free inquiry’ to study how a group handles an inherently biased test?” To Carl (2018) I would say, “What about the harm done by assuming that the hereditarian hypothesis is true, that IQ tests test intelligence, and by the funneling of minority children into EMR classes?” And to Meisenberg (2019) I would say, “The answer to the question ‘Should research into cognitive differences be forbidden?’ should be ‘Yes’: it should be forbidden and banned, since no good can come from a test that was biased from its very beginnings.” There is no ‘good’ that can come from using inherently biased tests, which is why the hereditarian-environmentalist debate on IQ is useless.
It is due to newspapers and other media outlets that people hold the beliefs about genetics that they do. Behavioral genetics studies are overrepresented in newspapers, and IQ is a subfield of behavioral genetics. Is contemporary research ignored in the mainstream press? Not at all. Recent articles on the social stratification of Britain have appeared in the mainstream press. So what are Cofnas, Carl, and Meisenberg complaining about? It seems to stem from a persecution complex: the wish to be seen as the new ‘Galileo’ who, in the face of oppression, told the truth that others did not want to hear, and whom they therefore attempted to silence.
Well, that is not what is going on here. Behavioral genetic studies are constantly pushed in the mainstream press; the complaints from the aforementioned authors mean nothing. Look to the newspapers and the public’s perception of genes to see that their claims are false.
Project Coast was a secret biological/chemical weapons program of the apartheid government in South Africa, started by a cardiologist named Wouter Basson. Among the many things the program attempted was the development of a bio-chemical weapon that would target blacks and only blacks.
I used to listen to the Alex Jones show at the beginning of the decade, and in one of his rants he brought up Project Coast and the attempt to develop a weapon that would target only blacks. So I looked into it, and there is some truth to it.
For instance, The Washington Times writes in its article Biotoxins Fall Into Private Hands:
More sinister were the attempts — ordered by Basson — to use science against the country’s black majority population. Daan Goosen, former director of Project Coast’s biological research division, said he was ordered by Basson to develop ways to suppress population growth among blacks, perhaps by secretly applying contraceptives to drinking water. Basson also urged scientists to search for a “black bomb,” a biological weapon that would select targets based on skin color, he said.
“Basson was very interested. He said ‘If you can do this, it would be very good,'” Goosen recalled. “But nothing came of it.”
They also devised novel ways to disperse the toxins: using letters and cigarettes to transmit anthrax to black communities (a method familiar to anyone old enough to remember 9/11), lacing sugar cubes with salmonella, and lacing beer and peppermint candy with poison.
Project Coast was, at its heart, a eugenics program (Singh, 2008). Singh (2008: 9) writes, for example, that “Project Coast also speaks for the need for those involved in scientific research and practice to be sensitized to appreciate the social circumstances and particular factors that precipitate a loss of moral perspective on one’s actions.”
Jackson (2015) states that another objective of the Project was to develop anti-fertility drugs and attempt to distribute them into the black population in South Africa to decrease birth rates. They also attempted to create vaccines to make black women sterile to decrease the black population in South Africa in a few generations—along with attempting to create weapons to only target blacks.
The head of the weapons program, Wouter Basson, is even thought by some to have developed HIV with help from the CIA to cull the black population (Nattrass, 2012). There are many conspiracy theories that involve HIV being created to cull black populations, though they are pretty far-fetched. In any case, since the program was attempting to develop new kinds of bioweapons to target certain populations, it is not out of the realm of possibility that there is a kernel of truth to the story.
So now we come to today. Kyle Bass claimed that the Chinese already have access to our genomes through companies like BGI (with which Steve Hsu has worked), stating that “there’s a Chinese company called BGI that does the overwhelming majority of all the sequencing of U.S. genes. … China had the genomic sequence of every single person that’s been gene typed in the U.S., and they’re developing bio weapons that only affect Caucasians.”
I have no way to verify these claims (they are probably bullshit), but given what went on in the 80s and 90s in South Africa with Project Coast, I do not believe it is outside the realm of plausibility, though ‘Caucasian’ is a very broad grouping.
Imagine someone attempting to develop a bioweapon that targets only Ashkenazi Jews. They could, say, attempt to target carriers of Tay-Sachs disease; it is predominantly a disease of Ashkenazi Jews, though it also occurs in other populations, like French Canadians. Or imagine a bioweapon that targets only those with the sickle cell trait (SCT): certain African ethnies are more likely to carry the trait, but it is also prevalent in southern Europe and northern Africa, since the trait is common in regions where malaria-carrying mosquitoes thrive.
With Chinese scientists like He Jiankui using CRISPR in 2018 in an attempt to edit the genomes of two Chinese twins to make them less susceptible to HIV, I can see a scientist in China attempting something like this. In our increasingly technological world, with all of these new tools we are developing, I would be surprised if nothing strange like this were going on.
Some claim that “China will always be bad at bioethics”:
Even when ethics boards exist, conflicts of interest are rife. While the Ministry of Health’s ethics guidelines state that ethical reviews are “based upon the principles of ethics accepted by the international community,” they lack enforcement mechanisms and provide few instructions for investigators. As a result, the ethics review process is often reduced to a formality, “a rubber stamp” in Hu’s words. The lax ethical environment has led many to consider China the “Wild East” in biomedical research. Widely criticized and rejected by Western institutions, the Italian surgeon Sergio Canavero found a home for his radical quest to perform the first human head transplant in the northern Chinese city of Harbin. Canavero’s Chinese partner, Ren Xiaoping, although specifying that human trials were a long way off, justified the controversial experiment on technological grounds, “I am a scientist, not an ethical expert.” As the Chinese government props up the pseudoscience of traditional Chinese medicine as a valid “Eastern” alternative to anatomy-based “Western” medicine, the utterly unscientific approach makes the establishment of biomedical regulations and their enforcement even more difficult.
Chinese ethicists, though, did respond to the charge of a ‘Wild East’, writing:
Some commentators consider Dr. He’s wrongdoings as evidence of a “Wild East” in scientific ethics or bioethics. This conclusion is not based on facts but on stereotypes and is not the whole story. In the era of globalization, rule-breaking is not limited to the East. Several cases of rule-breaking in research involved both the East and the West.
Henning (2006) notes that “bioethical issues in China are well covered by various national guidelines and regulations, which are clearly defined and adhere to internationally recognized standards. However, the implementation of these rules remains difficult, because they provide only limited detailed instructions for investigators.” With a country as large as China, of course, it will be hard to implement guidelines on a wide scale.
Gene-edited humans were going to come sooner or later, but the way Jiankui went about it was all wrong: he raised funds, dodged supervision, and organized researchers on his own in order to carry out the gene-editing on the Chinese twins. “Mad scientists” no doubt exist in many places in many countries. But “… the Chinese state is not fundamentally interested in fostering a culture of respect for human dignity. Thus, observing bioethical norms run second.”
Attempts to develop bioweapons that target specific groups of people have already been made in recent history, so I would not doubt that someone, somewhere, is attempting something along these lines. Maybe it is happening in China, a ‘Wild East’ of low regulation and oversight. There is a bioethical divide between East and West, which I would chalk up to differences in collectivism vs. individualism (which some have claimed to be ‘genetic’ in nature; Kiaris, 2012). Since the West is more individualistic, it cares about individual embryos which eventually become persons; since the East is more collectivist, whatever is better for the group (that is, whatever can eventually make the group ‘better’) overrides the individual, and so tinkering with individual genomes is seen there as less of an ethical violation.
Jewish IQ is one of the most talked-about topics in the hereditarian sphere. Jews have higher IQs, Cochran, Hardy, and Harpending (2006: 2) argue, because “the unique demography and sociology of Ashkenazim in medieval Europe selected for intelligence.” To IQ-ists, IQ is influenced/caused by genetic factors, while environment accounts for only a small portion.
Lynn writes:
“Fourth, other environmentalists such as Marjoribanks (1972) have argued that the high intelligence of the Ashkenazi Jews is attributable to the typical “pushy Jewish mother”. In a study carried out in Canada he compared 100 Jewish boys aged 11 years with 100 Protestant white gentile boys and 100 white French Canadians and assessed their mothers for “Press for Achievement”, i.e. the extent to which mothers put pressure on their sons to achieve. He found that the Jewish mothers scored higher on “Press for Achievement” than Protestant mothers by 5 SD units and higher than French Canadian mothers by 8 SD units and argued that this explains the high IQ of the children. But this inference does not follow. There is no general acceptance of the thesis that pushy mothers can raise the IQs of their children. Indeed, the contemporary consensus is that family environmental factors have no long term effect on the intelligence of children (Rowe, 1994).”
The inference is a modus ponens:
P1 If p, then q.
P2 p.
C Therefore q.
Let p be “Jewish mothers scored higher on ‘Press for Achievement’ by X SDs” and let q be “this explains the high IQ of the children.”
So now we have:
Premise 1: If “Jewish mothers scored higher on “Press for Achievement” by X SDs”, then “this explains the high IQ of the children.”
Premise 2: “Jewish mothers scored higher on “Press for Achievement” by X SDs.”
Conclusion: Therefore, “this explains the high IQ of the children.”
Vaughn (2008: 12) notes that an inference is “reasoning from a premise or premises to … conclusions based on those premises.” The conclusion follows from the two premises, so how does the inference not follow?
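To make the point about validity explicit, the form of the argument can be checked mechanically with a truth table. This is a purely illustrative sketch (the function names are my own, not anything from Lynn or Vaughn): modus ponens is valid because the conclusion is true in every row of the truth table where both premises are true.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p, then q" is false only when p is true and q is false.
    return (not p) or q

def modus_ponens_is_valid():
    # Valid form: in every row where both premises hold, the conclusion holds.
    return all(
        q  # the conclusion
        for p, q in product([False, True], repeat=2)
        if implies(p, q) and p  # rows where both premises are true
    )

print(modus_ponens_is_valid())  # True
```

The only row where both premises hold is p = True, q = True, and there the conclusion q is true; hence the inference goes through whenever the premises do.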
IQ tests are tests of specific knowledge and skills. It therefore follows that if, for example, a mother is “pushy” and being pushy leads to more studying, then the IQ of the child can be raised.
Lynn’s claim that “family environmental factors have no long term effect on the intelligence of children” is puzzling. Rowe relies heavily on twin and adoption studies, which rest on false assumptions, as noted by Richardson and Norgate (2005), Moore (2006), Joseph (2014), Fosse, Joseph, and Richardson (2015), and Joseph et al (2015). The EEA is false, so we cannot accept the genetic conclusions drawn from twin studies.
Lynn and Kanazawa (2008: 807) argue that their “results clearly support the high intelligence theory of Jewish achievement while at the same time provide no support for the cultural values theory as an explanation for Jewish success.” They are positing “intelligence” as an explanatory concept, though Howe (1988) notes that “intelligence” is “a descriptive measure, not an explanatory concept.” “Intelligence,” says Howe (1997: ix), “is … an outcome … not a cause.” More specifically, it is an outcome of development from infancy all the way up to adulthood, and of exposure to the items on the test. Lynn has claimed for decades that high intelligence explains Jewish achievement. But whence came intelligence? Intelligence develops throughout the life cycle, from infancy to adolescence to adulthood (Moore, 2014).
Ogbu and Simon (1998: 164) note that Jews are “autonomous minorities”: groups small in number. They note that “Although [Jews, the Amish, and Mormons] may suffer discrimination, they are not totally dominated and oppressed, and their school achievement is no different from the dominant group (Ogbu 1978)” (Ogbu and Simon, 1998: 164). Jews are also voluntary minorities, and Ogbu (2002: 250-251; in Race and Intelligence: Separating Science from Myth) suggests five reasons for the good test performance of voluntary minorities, among them:
- Their preimmigration experience: Some do well since they were exposed to the items and structure of the tests in their native countries.
- They are cognitively acculturated: They acquired the cognitive skills of the white middle-class when they began to participate in their culture, schools, and economy.
- The history and incentive of motivation: They are motivated to score well on the tests as they have this “preimmigration expectation” in which high test scores are necessary to achieve their goals for why they emigrated along with a “positive frame of reference” in which becoming successful in America is better than becoming successful at home, and the “folk theory of getting ahead in the United States”, that their chance of success is better in the US and the key to success is a good education—which they then equate with high test scores.
So if ‘intelligence’ tests are tests of culturally specific knowledge and skills, and if certain groups are more exposed to that knowledge, it follows that those groups will be better prepared for test-taking, and specifically for IQ tests.
The IQ-ists attempt to argue that differences in IQ are due largely to differences in ‘genes for’ IQ, and this explanation is supposed to account for Jewish IQ and, along with it, Jewish achievement. (See also Gilman, 2008 and Ferguson, 2008 for responses to the just-so storytelling of Cochran, Hardy, and Harpending, 2006.) Lynn, in effect, is invoking genetic confounding: he presupposes that Jews have ‘high IQ genes’ and that these explain the “pushiness” of Jewish mothers, who then pass on their “genes for” high IQ. But the evolutionary accounts (just-so stories) explaining Jewish IQ fail: Ferguson (2008) shows that “there is no good reason to believe that the argument of [Cochran, Hardy, and Harpending, 2006] is likely, or even reasonably possible.”
Prinz (2014: 68) notes that Cochran et al have “a seductive story” (aren’t all just-so stories seductive since they are selected to comport with the observation? Smith, 2016), while continuing (pg 71):
The very fact that the Utah researchers use to argue for a genetic difference actually points to a cultural difference between Ashkenazim and other groups. Ashkenazi Jews may have encouraged their children to study maths because it was the only way to get ahead. The emphasis remains widespread today, and it may be the major source of performance on IQ tests. In arguing that Ashkenazim are genetically different, the Utah researchers identify a major cultural difference, and that cultural difference is sufficient to explain the pattern of academic achievement. There is no solid evidence for thinking that the Ashkenazim advantage in IQ tests is genetically, as opposed to culturally, caused.
Nisbett (2008: 146) notes other problems with the theory—most notably Sephardic over-achievement under Islam:
It is also important to the Cochran theory that Sephardic Jews not be terribly accomplished, since they did not pass through the genetic filter of occupations that demanded high intelligence. Contemporary Sephardic Jews in fact do not seem to have unusually high IQs. But Sephardic Jews under Islam achieved at very high levels. Fifteen percent of all scientists in the period AD 1150-1300 were Jewish—far out of proportion to their presence in the world population, or even the population of the Islamic world—and these scientists were overwhelmingly Sephardic. Cochran and company are left with only a cultural explanation of this Sephardic efflorescence, and it is not congenial to their genetic theory of Jewish intelligence.
Finally, Berg and Belmont (1990: 106) note that “The purpose of the present study was to clarify a possible misinterpretation of the results of Lesser et al’s (1965) influential study that suggested the existence of a “Jewish” pattern of mental abilities. In establishing that Jewish children of different socio-cultural backgrounds display different patterns of mental abilities, which tend to cluster by socio-cultural group, this study confirms Lesser et al’s position that intellectual patterns are, in large part, culturally derived.” Cultural differences exist, and they have an effect on psychological traits. If cultural differences exist and affect psychological traits (with culture influencing a population’s beliefs and values), and if IQ tests are culturally-/class-specific knowledge tests, then it necessarily follows that IQ differences are cultural/social in nature, not ‘genetic.’
In sum, Lynn’s claim that the inference does not follow is ridiculous: the argument provided is a modus ponens, so the inference does follow. Lynn’s claim that “pushy Jewish mothers” do not explain the high IQs of Jews fails as well. If IQ tests are tests of middle-class knowledge and skills, and children are exposed to the structure and items on them, then being “pushy” with children (that is, getting them to study and whatnot) would explain higher IQs. Lynn’s and Kanazawa’s assertion that “high intelligence is the most promising explanation of Jewish achievement” also fails, since intelligence is not an explanatory concept (a cause) but a descriptive measure that develops across the lifespan.
Why do some groups of people use chopsticks while others do not? Years back, Hamer and Sirota (2000) constructed a thought experiment along these lines: find a few hundred students at a university, gather DNA samples from their cheeks, and map them for candidate genes associated with chopstick use. Come to find out, one of the genetic markers is associated with chopstick use, accounting for 50 percent of the variation in the trait. The effect even replicates many times and is highly significant: but it is biologically meaningless.
One may look at East Asians and ask “Why do they use chopsticks?” or “Why are they so good at using them while Americans aren’t?” and arrive at a ridiculous study like the one described above. One may even find an association between the trait and a genetic marker, and find that it replicates as a significant hit. But it can all be for naught, because population stratification has reared its head. Population stratification “refers to differences in allele frequencies between cases and controls due to systematic differences in ancestry rather than association of genes with disease” (Freedman et al, 2004). It “is a potential cause of false associations in genetic association studies” (Oetjens et al, 2016).
Such population stratification should have been anticipated in the chopsticks-gene study described above, since two different populations were studied. Kaplan (2000: 67-68) describes this well:
A similar argument, by the way, holds true for molecular studies. Basically, it is easy to mistake mere statistical associations for a causal connection if one is not careful to properly partition one’s samples. Hamer and Copeland develop an amusing example of some hypothetical, badly misguided researchers searching for the “successful use of selected hand instruments” (SUSHI) gene (hypothesized to be associated with chopstick usage) between residents in Tokyo and Indianapolis. Hamer and Copeland note that while you would be almost certain to find a gene “associated with chopstick usage” if you did this, the design of such a hypothetical study would be badly flawed. What would be likely to happen here is that a genetic marker associated with the heterogeneity of the group involved (Japanese versus Caucasian) would be found, and the heterogeneity of the group involved would independently account for the differences in the trait; in this case, there is a cultural tendency for more people who grow up in Japan than people who grow up in Indianapolis to learn how to use chopsticks. That is, growing up in Japan is the causally important factor in using chopsticks; having a certain genetic marker is only associated with chopstick use in a statistical way, and only because those people who grow up in Japan are also more likely to have the marker than those who grew up in Indianapolis. The genetic marker is in no way causally related to chopstick use! That the marker ends up associated with chopstick use is therefore just an accident of design (Hamer and Copeland, 1998, 43; Bailey 1997 develops a similar example).
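Kaplan’s point can be sketched with a toy simulation (all the population names and frequencies below are made-up assumptions for illustration): a marker that is merely more common in one population will look “associated” with chopstick use in a pooled sample, yet the association vanishes once the analysis is stratified by population.

```python
import random

random.seed(42)

def simulate(n=10000):
    """Simulate two populations where chopstick use is cultural, not genetic."""
    rows = []
    for _ in range(n):
        pop = random.choice(["Tokyo", "Indianapolis"])
        # Assumed allele frequencies: the marker is simply more common in Tokyo.
        marker = random.random() < (0.8 if pop == "Tokyo" else 0.2)
        # Chopstick use depends on where you grew up, not on the marker.
        chopsticks = random.random() < (0.9 if pop == "Tokyo" else 0.1)
        rows.append((pop, marker, chopsticks))
    return rows

def use_rate(rows):
    return sum(chop for _, _, chop in rows) / len(rows)

data = simulate()

# Pooled analysis: the marker looks strongly "associated" with chopstick use.
pooled_gap = (use_rate([r for r in data if r[1]])
              - use_rate([r for r in data if not r[1]]))
print(f"pooled gap: {pooled_gap:.2f}")  # large, despite no causal link

# Stratified analysis: within each population the association disappears.
for pop in ["Tokyo", "Indianapolis"]:
    sub = [r for r in data if r[0] == pop]
    gap = (use_rate([r for r in sub if r[1]])
           - use_rate([r for r in sub if not r[1]]))
    print(f"{pop} gap: {gap:.2f}")  # near zero
```

The pooled gap appears because the marker and the behavior are both proxies for population membership; stratifying by population removes it, which is exactly the confound GWASs must correct for.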
In this way, most—if not all—of the results of genome-wide association studies (GWASs) can be accounted for by population stratification. Hamer and Sirota (2000) is a warning to psychiatric geneticists not to be quick to ascribe function and causation to hits on certain genes from association studies (of which GWASs are one kind).
Many studies, for example Sniekers et al (2017) and Savage et al (2018), purport to “account for” less than 10 percent of the variance in a trait like “intelligence” (derived from IQ tests that lack construct validity). Other GWA studies purport to identify genes that affect testosterone production, finding that those who carry a certain variant are more likely to have low testosterone (Ohlsson et al, 2011). Population stratification can have an effect in these studies, too. GWASs give rise to spurious correlations that arise from population structure; what GWASs are actually measuring is social class, not a “trait” (Richardson, 2017b; Richardson and Jones, 2019). Note that correcting for socioeconomic status (SES) fails, as the two are distinct (Richardson, 2002). (Note also that GWASs lead to PGSs, which are, of course, flawed too.)
Such papers presume that correlations are causes and that interactions between genes and environment either do not exist or are irrelevant (see Gottfredson, 2009 and my reply). Both of these presumptions are false. Correlations can, of course, lead to figuring out causes, but, as the chopstick example above shows, attributing causation to hits that are “replicable” and “strongly significant” will still yield false positives due to that same population stratification. GWASs and similar studies attempt to account for the heritability estimates gleaned from twin, family, and adoption studies. But the assumptions underlying those study designs have been shown to be false, and the heritability estimates are therefore highly exaggerated (and flawed), which leads to “looking for genes” that are not there (Charney, 2012; Joseph et al, 2016; Richardson, 2017a).
Richardson’s (2017b) argument is simple: (1) there is genetic stratification in human populations which will correlate with social class; (2) since there is genetic stratification in human populations which will correlate with social class, the genetic stratification will be associated with the “cognitive” variation; (3) if (1) and (2) then what GWA studies are finding are not “genetic differences” between groups in terms of “intelligence” (as shown by “IQ tests”), but population stratification between social classes. Population stratification still persists even in “homogeneous” populations (see references in Richardson and Jones, 2019), and so, the “corrections for” population stratification are anything but.
So what accounts for the pittance of “variance explained” in GWASs and similar association studies (Sniekers et al, 2017 “explained” less than 5 percent of the variance in IQ)? Population stratification: specifically, such studies capture genetic differences that arose through migration. GWA studies use huge samples in order to find the signals of genes of small effect that underlie the complex trait being studied. Take what Noble (2018) says:
As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).
Calude and Longo (2016; emphasis theirs) “prove that very large databases have to contain arbitrary correlations. These correlations appear only due to the size, not the nature, of data. They can be found in “randomly” generated, large enough databases, which — as we will prove — implies that most correlations are spurious.”
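Calude and Longo’s point is easy to demonstrate: generate variables that are pure noise and, with enough of them, some pair will correlate “strongly” by chance alone. This is a toy sketch (the variable and observation counts are arbitrary choices of mine):

```python
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# 200 variables, 20 observations each, all random noise.
n_vars, n_obs = 200, 20
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

# Scan all ~20,000 pairs for the strongest correlation.
best = max(abs(pearson(data[i], data[j]))
           for i in range(n_vars) for j in range(i + 1, n_vars))
print(f"strongest |r| among random variables: {best:.2f}")  # a sizable "correlation" from pure noise
```

Every one of these correlations is spurious by construction; and the larger the dataset, the more of them appear, which is why “replicable, significant” hits in huge GWAS samples are not by themselves evidence of causation.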
So why should we take association studies seriously when they fall prey to population stratification (measuring differences between social classes and other populations) and when big datasets inevitably yield spurious correlations? I fail to see a good reason. The chopsticks-gene example perfectly illustrates the current problems with GWASs for complex traits: we are just seeing social (and other) stratification between populations, not “genetic” differences in the trait being looked at.
I started this blog in June of 2015. I recall trying out names for it, “politicallyincorrect.com” at first, but the domain was taken, so I settled on “notpoliticallycorrect.me”. Back then, of course, I was a hereditarian pushing the likes of Rushton, Kanazawa, Jensen, and others. To be honest, I could never have seen myself disbelieving the “fact” that certain races were more or less intelligent than others; the very idea was preposterous, I used to believe. IQ tests, I thought, were a completely scientific instrument which showed, however crudely, that certain races were more intelligent than others. I held these beliefs for around two years after the creation of this blog.
Back then, I used to go to Barnes & Noble, browse the biology section, choose a book, and drink coffee all day while reading. (Black coffee, of course.) I recall, back in April of 2017, seeing the book DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes on the shelf in the biology section. The baby blue cover caught my eye, but I scoffed at the title. DNA most definitely was destiny, I thought; without DNA we could not be who we are. I ended up buying the book and reading it. It took me about a week to finish, and by the end Heine had me questioning my beliefs.
In the book, Heine discusses IQ, heritability, genes, DNA testing to catch diseases, the MAOA gene, and so on. All in all, the book is against genetic essentialism which is rife in public—and even academic—thought.
After I read DNA Is Not Destiny, on my next few trips to Barnes & Noble I kept seeing Ken Richardson’s Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. I recall scoffing at that title even more than I had at Heine’s. I did not buy the book at first, but I kept seeing it every time I went. When I finally bought it, my worldview was transformed. Before, I thought of IQ tests as being able, however crudely, to measure intelligence differences between individuals and groups. The number the test spits out was one’s “intelligence quotient”, and there was no way to raise it, though of course there were many ways to decrease it.
But Richardson’s book showed me the many biases, both conscious and unconscious, implicit in the study of “intelligence”. It showed me the many false assumptions that IQ-ists make when constructing tests. Perhaps most importantly, it showed me that IQ test scores track one’s social class, and that social class encompasses many other variables that affect test performance. Describing IQ tests as instruments that identify one’s social class therefore seemed apt, especially given the content of the tests and the fact that they were created by members of a narrow upper class. This, to me, ensured that the test designers would get the result they wanted.
Not only did this book change my views on IQ; I did a complete 180 on evolution, too (which Fodor and Piattelli-Palmarini then solidified). Richardson, in chapters 4 and 5, shows that genes do not work the way most people think they do and that they are only used by and for the physiological system to carry out different processes. I do not know which part of this book (the part on IQ or the part on evolution) most radically changed my beliefs. But after reading Richardson, I discovered Susan Oyama, Denis Noble, Eva Jablonka and Marion Lamb, David Moore, David Shenk, Paul Griffiths, Karola Stotz, Jerry Fodor, and others who oppose the neo-Darwinian Modern Synthesis.
Richardson’s most recent book then led me to his other work, and to that of other critics of IQ and of the current neo-Darwinian Modern Synthesis. From then on I was what most would term an “IQ-denier”, since I disbelieve the claim that IQ tests test intelligence, and an “evolution denier”, since I deny the claim that natural selection is a mechanism. In any case, the radical changes in both of what I would term my major views were slow-burning, occurring over the course of a few months.
This can be evidenced by just reading the archives of this blog. For example, check the archives from May 2017 and read my article Height and IQ Genes, then read the April 2017 article Reading Wrongthought Books in Public, and you can see that over a two-month period my views slowly began to shift toward “IQ-denialism” and the Extended Evolutionary Synthesis (EES). Then, in June of 2017, after defending Rushton’s r/K selection theory for years, I recanted those views, too, due to Anderson’s (1991) rebuttal of Rushton’s theory. That three-month period from April to June was extremely pivotal in shaping the views I hold today.
After reading those two books, I went from someone who believed that nothing could ever shake his belief in IQ tests to one of the most outspoken critics of IQ in the “HBD” community. But the views on evolution that I now hold may be more radical than my views on IQ, because there the object of attack is Darwin himself, and the theory he formulated, not merely a test.
The views I used to hold were staunch; I really believed I would never recant them, because I was privy to “The Truth™” while everyone else was a useful idiot who did not believe in the reality of the intelligence differences that IQ tests showed. But my curiosity got the best of me, and I ended up buying two books that radically shifted my thinking on IQ and, along with it, on evolution itself.
So why did I change my views on IQ and evolution? I changed them because of the conceptual and methodological problems on both fronts that Richardson and Heine pointed out to me. These changes, which I underwent more than two years ago, were pretty shocking to me. As I realized my views were shifting, I could not believe it, since I recall saying to myself, “I’ll never change my views.” The inadequacy of the replies to the critics was yet another reason for the shift.
It’s funny how things work out.