The hereditarian-environmentalist debate has been ongoing for over 100 years. In that time, many theories have been put forward to explain the disparities between individuals and groups. In one camp you have the hereditarians, who claim that any non-zero heritability for IQ scores means that hereditarianism is true (eg Warne, 2020); in the other camp you have the environmentalists, who claim that differences in IQ are explained by environmental factors. This debate has raged since the 1870s, when Francis Galton coined the “nature-nurture” dichotomy, and it still rages today. Unfortunately, the environmentalists lend credence to IQ-ist claims that, however imperfect, IQ tests are “measures” of intelligence.
Three recent books on the matter are A Terrible Thing to Waste: Environmental Racism and its Assault on the American Mind (Washington, 2019), Making Kids Cleverer: A Manifesto for Closing the Advantage Gap (Didau, 2019), and Young Minds Wasted: Reducing Poverty by Enhancing Intelligence in Known Ways (Schick, 2019). All three of these authors are clearly environmentalists and they accept the IQ-ist canard that IQ—however crudely—is a “measure” of “intelligence.”
There are, however, no sound arguments that IQ tests “measure” intelligence, and there is no response to the Berka/Nash measurement objection to the claim that IQ tests are a “measure”, since no hereditarian can articulate the specified measured object, the object of measurement, and the measurement unit for IQ; there is also no accepted definition or theory of “intelligence”. So how can we say that some “thing” is being “measured” with a certain instrument if we have not satisfactorily defined what we claim to be measuring, with a well-accepted theory of what we are measuring (Richardson and Norgate, 2015; Richardson, 2017) and a specified measured object, object of measurement, and measurement unit (Berka, 1983a, 1983b; Nash, 1990; Garrison, 2003, 2009) for the construct we want to measure?
But the point of this article is that environmentalists push the hereditarian canard that IQ is, however crudely, equal to intelligence. And though the authors have good intentions and point to things we can do to attempt to ameliorate differences between individuals in different environments, they still lend credence to the hereditarian program.
A Terrible Thing to Waste
Washington (2019) discusses the detrimental effects of lead, mercury, and other metals (and the possible effects of still others) that are more likely to be found in low-income black and “Hispanic” communities, along with iodine deficiencies. These environmental exposures retard normal brain development. But one is not justified in claiming that IQ tests are measures of “intelligence”—at best, as Washington (2019) argues, we can claim that they are indexes of the effects of environmental polluters on the brains of developing children.
Intelligence is a product of environment and experience that is forged, not inherited; it is malleable, not fixed. (Washington, 2019: 20)
While it is true, as Washington claims, that we can mitigate the problems from toxic metals and from the lack of other nutrients pertinent to brain development by addressing the problems in these communities, it does not follow that IQ is a “biological” thing. Yes, IQ is malleable (contra hereditarian claims), and Headstart does work to improve life outcomes, even though such gains “fade out” after the child leaves the enriched environment. Lead poisoning, for example, has led to a loss of 23 million IQ points per year (Washington, 2019: 15). But I am not worried about lost IQ points (even though preventing those IQ points from being lost would mean directly improving the environments that lead to the decrease). I am worried about the detrimental effects of these toxic chemicals on the developing minds of children; lost IQ points are an outcome of this effect. At best, IQ tests can track cognitive damage due to pollutants in these communities (Washington, 2019), but they do NOT “measure” intelligence. (Also note that lead consumption is associated with higher rates of crime, which is yet another reason to reduce the consumption of lead in these communities.)
Speaking of “measuring intelligence”, Washington (2019: 29) noted that Jensen (1969: 5) stated that while “intelligence” is hard to define, it can be measured… But how does that make any sense? How can you measure what you can’t define? (See arguments (i), (ii), and (iii) here.)
Big Lead, though, “actively encouraged landlords to rent to families with vulnerable young children by offering financial incentives” (Washington, 2019: 55). This was in reference to the researchers who studied the deleterious effects of lead consumption on developing humans. “The participation of a medical researcher, who is ethically and legally responsible for protecting human subjects, changes the scenario from a tragedy to an abusive situation. Moreover, this exposure was undertaken to enrich landlords and benefit researchers at the detriment of children” (Washington, 2019: 55). We realized that lead had deleterious effects on development as early as the 1800s (Rabin, 2008), but Big Lead pushed back:
[Lead Industries Association’s] vigorous “educational” campaign sought to rehabilitate lead’s image, muddying the waters by extolling the supposed virtues of lead over other building materials. It published flooding guides and dispatched expert lecturers to tutor architects, water authorities, plumbers, and federal officials in the science of how to repair and “safely” install lead pipes. All the while the [Lead Industries Association] staff published books and papers and gave lectures to architects and water authorities that downplayed lead’s dangers. (Washington, 2019: 60)
In any case, Washington’s book is a good read on the effects of toxic metals on brain development, and while we must do what we can to ameliorate the effects of these metals in low-income communities, IQ increases are merely a side effect of ameliorating the toxic metals in these communities.
Making Kids Cleverer
Didau (2019: 86) outright claims that “intelligence is measured by IQ tests”—he is pushing the hereditarian view that IQ tests “measure intelligence.” (A strange claim, since on pg 95-96 he says that IQ tests are “a measure of relative intelligence.”)
In the book, Didau accepts many hereditarian premises—like the claims that IQ tests measure intelligence and that heritability can partition genetic and environmental variation. Further, Didau says in the Acknowledgements (pg 11) that Ritchie’s (2015) Intelligence: All That Matters “forms the backbone for much of the information in Chapters 3 and 5.” So we can see how the hereditarian IQ-ist stance colors his view of the relationship between “IQ” and “intelligence.” He also makes the bald claims that “intelligence is a good candidate for being the best researched and best understood characteristic of the human brain” and that it’s “also probably the most stable construct in all psychology” (pg 81).
Didau takes the view that intelligence is both a way to acquire knowledge and the type of knowledge we know (pg 83)—basically, it’s what we know and what we do with what we know, along with ways of acquiring said knowledge. What one knows is obviously a product of the environment one grows up in, and what we do with the knowledge we have is similarly down to environmental factors. Didau states that “Possibly the strongest correlations [with IQ] are those with educational outcomes” (pg 92). But Didau, it seems, fails to realize that this strong correlation is built into the test, since IQ tests and scholastic achievement tests are different versions of the same test (Schwartz, 1975; Richardson, 2017).
In one of the “myths of intelligence” he discusses (Myth 3: Intelligence cannot be increased, pg 102), Didau uses an analogy similar to mine. In an article on “the fade-out effect“, I argued that if one goes to the gym, works out, gets bigger, and then stops going, losing the gains, it would be absurd to conclude that going to the gym was useless simply because the gains faded once the enriched environment was left. The direct parallel between my gym/muscle-building analogy and Headstart, then, is clear.
In another myth (Myth 4: IQ tests are unfair), Didau claims that if you get a low IQ score then you are probably unintelligent, while if you get a high one, it means you know the answers to the questions—which is obviously true. Of course, to know the answers to the questions (and to be able to reason the answers for some of the questions), one must be exposed to the knowledge that is contained in that test, or they won’t score high.
We can reject the use of IQ scores by racists, he says, who would use them to justify the superiority of their own groups and the inferiority of “the other”, all while not rejecting that IQ tests are valid (where have they been validated?). “Something real and meaningful” is being measured by these tests, and we have chosen to call this “intelligence” (pg 107). But we can say this about anything. Imagine having a test Y for X. We don’t really know what X is, nor that Y really measures it. But because the results accord with our a priori biases, and since we have constructed Y to get the results we think we should see, we assume that we are measuring what we set out to measure—even though we have no idea what X is, and without the basic requirements of measurement.
While Didau does seem to agree with some of the criticisms I’ve leveled at IQ tests over the years (cross-cultural testing is pointless, IQ scores can be changed), he is, obviously, pushing a hereditarian IQ-ist agenda, cloaked as an environmentalist. He contradicts himself by first saying that intelligence is measured by IQ tests and later calling them only “a measure of relative intelligence”—and I don’t think one should assume that he meant they are an “imperfect measure” of intelligence. (Imagine an imperfect measure of length—would we still be using it to build houses if it was only somewhat accurate?) Didau also agrees with the g theorists that there is a “general cognitive ability.” He agrees with Ritchie and Tucker-Drob (2018) and Ceci (1996) that schooling can and does increase IQ scores (as summer vacations show that IQ scores decrease without schooling) (see Didau, 2018: Chapter 5). So while he agrees that IQ isn’t static and that education can and does increase it, he is still pushing a hereditarian IQ-ist model of “intelligence”—even though, as he admits, the concept of “intelligence” has yet to be satisfactorily defined.
Young Minds Wasted
In the last book, Young Minds Wasted, Schick (2019) dispenses with many hereditarian myths (such as the myth of the normal distribution, see here), yet he still—through an environmentalist lens—justifies the claim that IQ tests test intelligence. While he masterfully dispenses with the “IQ is normally distributed” claim (see the discussion on pg 180-186), the tagline of the book is “reducing poverty by enhancing intelligence in known ways.”
The poor’s intelligence is wasted, he says, by an intelligence-depressing environment. We can see the parallels here with Washington’s (2019) A Terrible Thing to Waste. Schick claims that “the single most important and widespread cause of poverty is the environmental constraints on intelligence” (pg 12, Schick’s emphasis). Now, like Washington, Schick says that a whole slew of chemicals and toxins decrease IQ (a truism) and by identity, intelligence. Of course, living in a deprived environment where one is exposed to different kinds of toxins and chemicals can retard brain development and lead to deleterious life outcomes down the line. But this fact does not mean that intelligence is being measured by these tests; it only shows that there are environments that can impede brain development which then is mirrored in a decrease in IQ scores.
Schick says that as intelligence increases, societal problems decrease. But, as I have argued at length, this is due to the way the tests themselves are constructed, which builds in the a priori biases of the tests’ constructors. We can construct a test with any kind of distribution we want. The items emerge arbitrarily from the heads of the test’s constructors, who then try them out on a standardization sample (Jensen, 1980: 71), looking for the results they want and assume a priori. Given this, what we accept as truisms regarding the relationship between IQ and life events can be turned on their head, since there is no logical reason to accept one set of items over another, other than that one set upholds the test constructor’s previously held biases.
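The point about item selection can be illustrated with a toy simulation. This is entirely hypothetical—the latent “familiarity” variable, the logistic response function, and the 40–60% pass-rate cut are assumptions made for illustration, not any real publisher’s procedure—but it shows the mechanism: whatever the underlying trait is (if there is one), retaining only items with near-50% pass rates yields a centered, roughly bell-shaped total score by construction.

```python
import math
import random
import statistics

random.seed(42)

N_PEOPLE = 2000   # simulated examinees
POOL_SIZE = 200   # candidate test items

# Hypothetical setup: each examinee has some latent "familiarity" with the
# item content, and each candidate item has a difficulty.
familiarity = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
difficulty = [random.uniform(-3, 3) for _ in range(POOL_SIZE)]

def passes(theta, d):
    # Logistic response: the more familiarity exceeds difficulty,
    # the likelier a pass.
    return random.random() < 1 / (1 + math.exp(-(theta - d)))

responses = [[passes(f, d) for d in difficulty] for f in familiarity]

# Item analysis in the spirit of what Jensen (1980: 71) describes:
# discard items whose pass rate strays far from ~50%.
pass_rate = [sum(row[j] for row in responses) / N_PEOPLE
             for j in range(POOL_SIZE)]
kept = [j for j in range(POOL_SIZE) if 0.4 <= pass_rate[j] <= 0.6]

# Total score on the retained items: centered near half the maximum,
# and approximately bell-shaped, purely because of the selection rule.
scores = [sum(row[j] for j in kept) for row in responses]
mean_score = statistics.mean(scores)
```

The distribution here is a consequence of the selection rule, not evidence about the shape of any underlying trait—which is the point of the argument above.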
Schick does agree that “intelligent behavior” can change throughout life, based on one’s life experiences. But “Human intelligence is based on several genetically determined capabilities such as cognitive functions” (pg 39). He also claims that genetic factors determine, while environmental factors influence, cognitive functions, memory, and universal grammar.
Along with his acceptance that genetic factors can influence IQ scores and other aspects of the mind, he also champions heritability estimates as being able to partition genetic and environmental variation in traits (even though they can do no such thing; Moore and Shenk, 2016). He uncritically accepts the 80/20 genetic/environmental heritability split from Bouchard and the 60/40 split from Jensen and from Murray and Herrnstein. These “estimates”—drawn mostly from family, twin, and adoption studies (Joseph, 2015)—are invalid due to the false assumptions the researchers hold, to say nothing of the conceptual difficulties with the concept of heritability itself (Moore and Shenk, 2016).
While Washington and Schick both make important points—that those who live in poor environments are at risk of being exposed to things that disrupt their development—they both, along with Didau, accept the hereditarian claim that IQ tests are tests of intelligence. While each author has their own specific caveats (some of which I agree with, and others I do not), they keep the hereditarian claim alive by lending credence to hereditarian arguments, even though they do not look at the differences through a genetic lens.
While the authors have good intentions and while the research they discuss is extremely important and interesting (like the effects of toxins and metals on the development of the brain and the child), they—like their intellectual environmentalist ancestors—unwittingly lend credence to the hereditarian claim that IQ tests measure intelligence, even though they explain the causes of individual and group differences in completely different ways. These authors, with their assertions, accept the claim that certain groups are less “intelligent” than others—but with differences in environment, not genes, as the cause. And while that claim is true—the deleterious effects Washington and Schick discuss can and do retard normal development—it in no way, shape, or form means that “intelligence” is being measured.
Normal (brain) development is indeed a terrible thing to waste; we can teach kids more by exposing them to more things, and young minds are wasted by poverty. But accepting these premises does not require accepting the hereditarian dogma that IQ tests are measures of some undefined thing with no theory behind it. Though poverty, and the environments those in poverty live in, impede normal brain development, which is then reflected in IQ scores, it does not follow that these tests are “measuring” intelligence—at best, they show environmental challenges that change the brain of the individual taking the test.
One needs to be careful with the language they use, lest they lend credence to hereditarian pseudoscience.
Ranking human worth on the basis of how well one compares in academic contests, with the effect that high ranks are associated with privilege, status, and power, does suggest that psychometry is best explored as a form of vertical classification and attending rankings of social value. (Garrison, 2009: 36)
Binet and Simon’s (1916) book The Development of Intelligence in Children is something of a Bible for IQ-ists. The book chronicles the methods Binet and Simon used to construct their tests for children, to identify those children who needed more help at school. In the book, they describe the anatomic measures they used. Indeed, before becoming a self-taught psychologist, Binet measured skulls and concluded that skull measurements did not correlate with teachers’ assessments of their students’ “intelligence” (Gould, 1995, chapter 5).
In any case, despite Binet’s protestations that Gould discusses, he wanted to use his tests to create what Binet and Simon (1916: 262) called an “ideal city.”
It now remains to explain the use of our measuring scale which we consider a standard of the child’s intelligence. Of what use is a measure of intelligence? Without doubt one could conceive many possible applications of the process, in dreaming of a future where the social sphere would be better organized than ours; where every one would work according to his own aptitudes in such a way that no particle of force should be lost for society. That would be the ideal city. It is indeed far from us. But we have to remain among the sterner and matter-of-fact realities of life, since we here deal with practical experiments which are the most commonplace realities.
Binet disregarded his skull measurements as a correlate of ‘intelligence’ since they did not agree with teachers’ ratings. But then Binet and Simon (1916: 309) discuss how teachers assessed students (and give an example). This is how Binet made sure that the new psychological ‘measure’ he devised related to how teachers assessed their students. Binet and Simon’s “theory” grouped certain children as “superior” and others as “inferior” in ‘intelligence’ (whatever that is), but did not pinpoint biology as the cause of the differences between the children. These groupings, though, corresponded to the social class of the children.
Thus, in effect, what Binet and Simon wanted to do was organize society along social class lines, using their ‘intelligence tests’ to place individuals where they “belonged” in the hierarchy on the basis of their “intelligence”—whether or not this “intelligence” was “innate” or “learned.” Indeed, Binet and Simon did originally develop their scales to distinguish children who needed more help in school than others. They assumed that individuals had certain (intellectual) properties which related to their class position, and that by using their scales they could identify certain children and place them into certain classes for remedial help. But a closer reading of Binet and Simon shows two hereditarians who wanted to use their tests for reasons similar to those for which the tests were originally brought to America!
Binet and Simon’s test was created to “separate natural intelligence and instruction” since they attempted to ‘measure’ the “natural intelligence” (Mensh and Mensh, 1991). Mensh and Mensh (1991: 23) continue:
Although Binet’s original aim was to construct an instrument for classifying unsuccessful school performers inferior in intelligence, it was impossible for him to create one that would do only that, i.e., function at only one extreme. Because his test was a projection of the relationship between concepts of inferiority and superiority—each of which requires the other—it was intrinsically a device for universal ranking according to alleged mental worth.
This “ideal city” that Binet and Simon imagined would have individuals work according to their “own aptitudes”—meaning that individuals would work where their social class dictated they would work. This was, in fact, eerily similar to the uses of the test that Goddard translated and of the test—the Stanford-Binet—that Terman developed in 1916.
Binet and Simon (1916: 92) also discuss further uses for their tests, irrespective of job placement for individuals:
When the work, which is here only begun, shall have taken its definite character, it will doubtless permit the solution of many pending questions, since we are aiming at nothing less than the measure of intelligence; one will thus know how to compare the different intellectual levels not only according to age, but according to sex, social condition, and to race; applications of our method will be found useful to normal anthropology, and also to criminal anthropology, which touches closely upon the study of the subnormal, and will receive the principal conclusion of our study.
Binet, therefore, had views similar to Goddard’s and Terman’s regarding “tests of intelligence”, and Binet wanted to stratify society by ‘intelligence’ using his own tests (which were culturally biased against certain classes). Binet’s writings on the uses of his tests, ironically, mirrored what the creators of the Army Alpha and Beta tests believed. Binet believed that his tests could select individuals for the roles they were designated to fill. Binet, nevertheless, contradicted himself numerous times (Spring, 1972; Mensh and Mensh, 1991).
This dream of an “ideal city” was taken a step further when Binet’s test was brought to America and translated by Goddard and used for selecting military recruits (call it an “ideal country”). The tests would be constructed to “ensure” the right percentages of “the right” people ending up in the spots designated for them on the basis of their intelligence.
What Binet was attempting to do was mark individual social value with his test. He claimed that we can use his (practical) test to select people for certain social roles. Thus, Binet’s dream of what his tests would do—further developed by Goddard, Yerkes, Terman, et al—is inherent in what the IQ-ists of today want to do. They believe that there are “IQ cutoffs”, meaning that people with an IQ above or below a certain threshold won’t be able to do job X. However, the causal efficacy of IQ is exactly what is in question, along with the fact that IQ-ists construct their own biases into the tests they believe are ‘objective.’ But where Binet differed from the IQ-ists of today and from his contemporaries was in his belief that ‘intelligence’ is relative to one’s social situation (Binet and Simon, 1916: 266-267).
It is ironic that Gould believed we could use Binet’s test (along with contemporary tests constructed and ‘validated’—correlated—with Terman’s Stanford-Binet) for ‘good’; this is what Binet thought would be done with it. But once the hereditarians had Binet’s test, they took Binet’s arguments to their logical conclusion. This also has to do with the fact that the test was constructed first, AND THEN they attempted to ‘see’ what was ‘measured’ with correlational studies. The ‘meaning’ of test scores, thus, is seen after the fact with—wait for it—correlations with other tests that were ‘validated’ with other (unvalidated) tests.
This comes back to the claim that the mental can be ‘measured’ at all. If physicalism is false—and there are dozens of (a priori) arguments that establish this—and the mental is therefore irreducible to the physical, then psychological traits—and with them the mind—cannot be measured. Further, rankings are not measures (Nash, 1990: 63); therefore, ability and achievement tests cannot be ‘measures’ of any property of individuals or groups. The object of measurement is the human being, and this was inherent in Binet’s original conception of his test—a conception the IQ-ists in America carried forward with their restrictions on immigration in the early 1900s.
This speaks to the fatalism that is inherent in IQ-ism—and has been since the creation of the first standardized tests (of which IQ tests are one kind). These tests are—and have been since their inception—attempts to measure human worth and the differences in value between persons. The IQ-ist claims that “IQ tests must measure something”, and this ‘measurement’, it is claimed, is evidenced by the fact that the tests have ‘predictive validity.’ But such claims that a ‘property’ inherent in individuals and groups is being measured fail. The real ‘function’ of standardized testing is assessment, not measurement.
The “ideal city”, it seems, is just a city of IQ-ism—where one’s social role is delegated by where one scores on a test that is constructed to get the results the constructors want. Therefore, what Binet wanted his tests to do was (and some may even argue still is) being done: marking social worth (Garrison, 2004, 2009). Psychometry is therefore a political enterprise. It is inherently political and not “value-free.” Psychologists/psychometricians do not have an ‘objective science’, as the object of study (the human) can reflexively change their behavior when they know they are being studied. Their field is inherently political, and they mark individuals and groups—whether they admit it or not. “Ideal cities” can lead to eugenic thinking, in any case, and striving for “ideality” can lead to social harms—even if the intentions are ‘good.’
I started this blog almost 5 years ago. Currently (excluding this one), there are 480 articles on this blog. Searching my blog name “notpoliticallycorrect.me” on Google Scholar turns up two citations—one on “IQ” and obesity, and the other on inclusionism about race when it comes to medicine. These two citations neatly show my views and how they have changed in the 5 years since the creation of this blog. I will discuss both papers that cited me in turn.
In the journal Social and Human Sciences. Domestic and Foreign Literature (a sociology journal), a 2016 article of mine (published back in my “HBD” days), titled “Race, Obesity, Poverty, and IQ”, was cited, the authors writing:
income and education (which in the latter case presumably correlates with IQ levels). They have the highest prevalence of type 2 diabetes. In terms of ethnicity, overweight indicators are as follows: 67.3% for whites, 75.6% for African Americans and 77.9% for Latinos. Summing up all this, we obtain, in the words of the authors of the study, “politically incorrect conclusions”: African Americans and Hispanics are more at risk of living in poverty, have lower IQ, higher rates of obesity and a chance of developing diabetes; The main factor in these correlations is the IQ level (Race, obesity, poverty and IQ, 2016).
Almost four years later (after my views have undergone a significant change) I would draw different conclusions. Blacks are 51% more likely to be obese than whites (Lincoln, Abdou, and Lloyd, 2016), with a multitude of factors as the cause—though it seems that black American men with more African ancestry may be protected against central adiposity (Klimentidis et al, 2016). Racial disparities in obesity are due to an interaction of many factors (Byrd, Toth, and Stanford, 2018). Interestingly, black kids with obesity don’t perceive themselves as obese (Lankarani and Assani, 2018), which, presumably, is due to higher rates of obesity in the black population. Black girls are more likely to have an earlier menarche than white girls (e.g., Freedman et al, 2000), and this is because black girls are more likely to be obese than white girls, since leptin—present at higher levels given higher body fat—is permissive for menarche (Salsberry, Reagen, and Pajer, 2010).
We must look to social determinants of health to understand why certain non-white populations are more likely to be obese than others. Looking to “IQ” as causal for obesity—as I used to—obscures much more than it explains. We can look to epigenetic effects, for example, regarding biological explanations of obesity (Krueger and Reithner, 2016)—for instance, high BMI in black women being related to saliva-based DNA methylation, which is used as a marker for aging (Li et al, 2019). Even perceived racism (it does not have to be actual) can have physiologic effects on black women, heightening cortisol levels and leading to a heightened obesity risk (Mwendwa et al, 2016).
In any case, it’s cool that I got cited but uncool that it was something that I don’t believe anymore.
The second citation comes from Rossi (2020: 13) in the journal Social Science Information titled New avenues in epigenetic research about race: Online activism around reparations for slavery in the United States citing my article Race, Medicine, and Epigenetics: How the Social Becomes Biological:
Consequently, social scientists’ opinions about epigenetic research dealing with race and slavery have sometimes been scrutinized by blog authors. For example, the article untitled [sic] ‘Race, medicine, and epigenetics: How the social becomes biological’ published in 2019 on the blog Notpoliticallycorrect features a long discussion on whether race could be seen as a viable variable to discuss the epigenetics of trauma, especially relating to slavery in the US.14 After summarizing the views of legal scholar and sociologist Dorothy Roberts, who has argued repeatedly in her works against the use of the concept of race in biomedical sciences, the author sides with philosophers Michael Hardimon and Shannon Sullivan, who are both enthusiastic about the inclusion of race to discuss genetics and epigenetics:
Race and medicine is a tendentious topic. On one hand, you have people like sociologist Dorothy Roberts (2012) who argues against the use of race in a medical context, whereas philosopher of race Michael Hardimon thinks that we should not be exclusionists about race when it comes to medicine. If there are biological races, and there are salient genetic differences between them, then why should we disregard this when it comes to a medically relevant context? [. . .] So, we should not be exclusionists (like Roberts), we should be inclusionists (like Hardimon). [. . .] Furthermore, acknowledging the fact that the social dimensions of race can help us understand how racism manifests itself in biology (for a good intro to this see Sullivan’s (2015) book The Physiology of Racist and Sexist Oppression, for even if the ‘oppression’ is imagined, it can still have very real biological effects that could be passed onto the next generation – and it could particularly affect a developing fetus, too). It seems that there is a good argument that the effects of slavery could have been passed down through the generations manifesting itself in smaller bodies.
Relying also on Jasienska’s research, the author of this blog post therefore dismissed the idea that race should not be applied to the medical field, while using the words and legitimacy of humanities scholars such as Hardimon and Sullivan to back up their claims. These contributions show the way journalists and various blog authors write about epigenetics by mixing together scientific articles in various fields (the social sciences, philosophy, psychiatry, social work) in an effort to bring more legitimacy to the topic. This process highlights the ways in which lay circles produce new connections between various papers and texts dealing with epigenetics, no matter how different their fields of expertise may be.
This shows a very sharp contrast between my current views and my older views on race and obesity. My earlier belief that obesity was “determined” by IQ (e.g., Kanazawa, 2012; Kanazawa, 2014) was an error—people with low “IQs” are more likely to be in poverty and to have less access to good foods, along with the abundance of fast food restaurants in areas with a higher concentration of blacks (James et al, 2014). Black women, for instance, have a lower RMR than white women (Gannon, DiPietro, and Poehlman, 2000).
These two articles of mine that were cited (on similar issues, no less) show the evolution of my views over the four or so years between the publication of the two articles on this blog. This is a good case study in how one can view the aetiology of one thing completely differently based on the views one previously held. The views on obesity and race I hold now are much more complex than the reductive “it’s genes/IQ” kind of guy that I used to be. A more holistic view of obesity disparities—factoring in access to food (food swamps/deserts), income, location, etc.—is more informative than looking just to “IQ” or “genes for” obesity. Even if “genes for” obesity exist, and even if they are distributed unevenly across races, the predominant determinant of weight will be activity level/caloric consumption, which is based on SES and other factors—not “IQ” or “obesity genes.” The social does become biological, and it does have consequences for obesity disparities between and within races.
The other day on Twitter, Davide Piffer made the claim that North and South Italians are “two different races” and that the North is “governed by morons from the South.” What would make him say that North and South Italians “are two different races”? Well, a new study was just published which looked into the genetic divergence of North and South Italians. It seems that Piffer is saying that the fact that North and South Italians are genetically distinct means that they are races. But this is an error in reasoning—it is fallacious to believe that just because two groups are genetically distinct that they are therefore races.
Sazzini et al (2020) show evidence that North and South Italians genetically diverged after the last glacial maximum (LGM). They state that there was “adaptive evolution” at “insulin-related loci” in Italian regions with temperate climates, and that climatic factors differentiated those from the North and those from the South. The “adaptations” that those in the North have protect them from:
… we proposed climate-related selective pressures as potential factors having influenced adaptive evolution at insulin-related genes especially in the ancestors of Northern Italians. By regulating glucose homeostasis, adiposity, and thermogenesis in response to high-calorie diets adopted to cope with energetically demanding environmental conditions, these adaptive events might have also contributed to make people from Northern Italy less prone to develop T2D and obesity despite the challenging nutritional context imposed by modern lifestyles. Conversely, possible adaptations against pathogens and modulation of melanogenesis in response to high UV radiation are supposed to have played a role in reduced susceptibility of people from Southern Italy respectively to immunoglobulin-A nephropathy and skin cancers. Finally, multiple adaptive processes evolved by the overall Italian population, but having resulted more pronounced in people from the southern regions of the peninsula, were found to have the potential to secondarily modulate the longevity phenotype. Therefore, by pinpointing genetic determinants underlying biological adaptation of Italian population clusters in response to locally diverging environmental contexts, the present study succeeded in disclosing also valuable biomedical implications of such evolutionary events.
What they did was select 39 unrelated genomes, representative of the known genetic differences in the Italian population, and then compare them with 35 populations from all over Europe. They found that divergence between the two occurred between 12 and 19 kya. They presume that North Italians are “adapted” to lower temperatures and higher-kcal food, while South Italians are “adapted” to warmer climes and so have “genes to protect against” skin cancer and pathogens; gene variants “related” to longer life also showed changes.
The press release, though, cautions against adaptive conclusions:
The authors caution that although correlations may be drawn between evolutionary adaptations and current disease prevalence among populations, they are unable to prove causation, or rule out the possibility that more recent gene flow from populations exposed to diverse environmental conditions outside of Italy may have also contributed to the different genetic signatures seen between northern and southern Italians today.
While this is an interesting study (and it does need to rein in its ‘adaptive conclusions’), it does not show that North and South Italians are different races. If they are different races, how does it go? Is there a single North Italian race and a single South Italian race? Or are North Italians Caucasian, while South Italians would be African? Are there 5, 6, or 7 races in Piffer’s racial schema?
Like all hereditarians, he just assumes the existence of race—if this and that population are genetically distinct, then they must be races. Wow, how compelling an argument for the existence of races. But if North and South Italians are different races on the basis of genetic differentiation, then so are East and West Germans (Nelis et al, 2009), North and South Germans (Heath et al, 2008), Southeast and Northwest Dutch (Lao et al, 2013), North and South Dutch (Byrne et al, 2020), Northern and Southern Swedes (Humphreys et al, 2011), East and West Finns (Kerminen et al, 2017), etc. Using genetic differentiation as the basis for which population is or is not a race logically leads one down this path. Why not 7 billion races, since each individual is genetically unique? Oh, wait: He would probably say something about “breeding populations”—and that would be good, because he would then be stating conditions for racehood, not just assuming races’ existence on the basis of genetic differentiation. Though the claim would still fail.
Piffer has let his mask slip before—back in March he called immigrants to Italy “gorillas”, then saying that “Gorillas are nobler” because they would not take beds from the sick, since this was when Corona was really heating up in Italy. This is similar to what the “World’s Smartest Man” Christopher Langan said about gorillas and immigration. There seems to be a relationship between idiotic sayings about gorillas and immigration and racism… hmm…
In any case, the fact that North and South Italians are genetically distinct populations is in no way, shape, or form evidence that they are different races. For if it were, then there would be many, many races—even within countries populated by the same group of people—if we are to understand race as Piffer seems to (any genomic differentiation between populations makes them races). So is each family on earth a different race? This is the kind of conclusion that Piffer’s lazy thinking leads to. Piffer is just like Murray: if populations cluster in genomic analyses, then those population clusters are races. Two hereditarians, two assumptions that fail, since if we take them to their logical conclusion there are more races than is traditionally stated. Piffer, it seems, just sees a group he is clearly biased against (South Italians), sees that they are genomically distinct from the North, and then says “Aha! These morons from the South who are governing us are just a different race than we are!” Clinal differences in skin color, too, don’t ‘prove’ that North and South Italians are different races.
Too bad for Piffer, reality is different from his own biased world. Italy is over two thousand years old, and the people in the North and the South belong to the same race. Piffer’s ‘research’ into the “IQs” of North and South Italians (Lynn, 2010; Piffer and Lynn, 2014; see Cornoldi et al, 2010; D’Amico et al, 2011; Robinson, Saggino, and Tommasi, 2011; Daniele and Malanima, 2011; Cornoldi, Giofre, and Martini, 2013), in any case, is (and has been) suspect—but now we know that he has other motivations than just science!
(Note: The Italianthro blog has a ton of information on Italy, its peopling, “IQ”, and other things. Check the blog out.)
The East Asian race has been held up as an example of what a high “IQ” population can do and, along with the correlation between IQ and standardized testing, “HBDers” claim that this is proof that East Asians are more “intelligent” than Europeans and Africans. Lynn (2006: 114) states that the average IQ of China is 103. There are many problems with such a claim, though—not least the many reports of Chinese cheating on standardized tests. East Asians are claimed to be “genetically superior” to other races as regards IQ, but this claim fails.
Chinese IQ and cheating
Differences in IQ scores have been noted all over China (Lynn and Cheng, 2013), but the general consensus is that, as a country, Chinese IQ is 105, while in Singapore and Hong Kong it is 103 and 107, respectively (Lynn, 2006: 118). To explain the patterns of racial IQ scores, Lynn has proposed the Cold Winters theory (against which a considerable response has been mounted), which proposes that the harshness of the ice-age environment selected-for higher ‘general intelligence’ in East Asian and European populations; such a hypothesis appeals to hereditarians since East Asians (“Mongoloids,” as Lynn and Rushton call them) consistently score higher on IQ tests than Europeans (eg Lynn and Dziobon, 1979; Lynn, 1991; Herrnstein and Murray, 1994). In a recent editorial in Psych, Lynn (2019) criticizes this claim from Flynn (2019):
While northern Chinese may have been north of the Himalayas during the last Ice Age, the southern Chinese took a coastal route from Africa to China. They went along the Southern coast of the Middle East, India, and Southeast Asia before they arrived at the Yangzi. They never were subject to extreme cold.
In response, Lynn cites Frost’s (2019) article, where he claims that “mean intelligence seems to have risen during recorded history at temperate latitudes in Europe and East Asia.” This is just-so storytelling about how and why such “abilities” were “selected-for.” The Chinese do score higher on standardized tests than whites and blacks, and this deserves an explanation, but the Cold Winters theory fails; it’s a just-so story.
Before continuing, something must be noted about Lynn and his Chinese IQ data. Lynn ignores numerous studies on Chinese IQ—Lynn would presumably say that he wants to test those in good conditions and so disregards those parts of China with bad environmental conditions (as he did with African IQs). Here is a collection of forty studies that Lynn did not refer to—some showing that, even in regions in China with optimum living conditions, IQs below 90 are found (Qian et al, 2005). How could Lynn miss so many of these studies if he has been reading into the matter and, presumably, keeping up with the latest findings in the field? The only answer to the question is that Richard Lynn is dishonest. (I can see PumpkinPerson claiming that “Lynn is old! It’s hard to search through and read every study!” to defend this.)
Although the Chinese are currently trying to stop cheating on standardized testing (even a possible seven-year prison sentence does not deter it), cheating on standardized tests in China and by the Chinese in America is rampant. The following is but a sample of what can be found with a cursory search on the matter.
One of the most popular ways of cheating on standardized tests is to have another person take the exam for you—which is rampant in China. In one story, as reported by The Atlantic, students can hire “gunmen” to sit in on tests for them, though measures such as voice recognition and fingerprinting are being taken to fight back against this. It is well-known that much of the cheating on such tests is done by international students.
Even on the PISA—which is used as an “IQ” proxy since the two correlate highly (.89) (Lynn and Mikk, 2009)—there is cheating. For the PISA, each country is to select, at random, 5,000 of its 15-year-old children from around the country and administer the test—China instead chose its biggest provinces, which are packed with universities. Further, score fluctuations attract attention, which indicates dishonesty. In 2000, more than 2,000 people gathered outside a university to protest a new law which banned cheating on tests.
The rift amounted to this: Metal detectors had been installed in schools to route out students carrying hearing or transmitting devices. More invigilators were hired to monitor the college entrance exam and patrol campus for people transmitting answers to students. Female students were patted down. In response, angry parents and students championed their right to cheat. Not cheating, they said, would put them at a disadvantage in a country where student cheating has become standard practice. “We want fairness. There is no fairness if you do not let us cheat,” they chanted. (Chinese students and their parents fight for the right to cheat)
Surely, with rampant cheating on standardized tests in China (and by Chinese Americans), and in light of this culture of cheating on tests in China and in America, we can trust the Chinese IQ numbers.
“Genetic superiority” and immigrant hyper-selectivity
Strangely, some proponents of the concept of “genetic superiority” and “progressive evolution” still exist. PumpkinPerson is one of those proponents, writing articles with titles like “Genetically superior: Are East Asians more socially intelligent too?”, “More evidence that East Asians are genetically superior”, and “Oriental populations: Genetically superior”, even referring to a fictional character on a TV show as a “genetic superior.” Such fantastical delusions come from Rushton’s ridiculous claim that evolution may be progressive and that some populations are, therefore, “more evolved” than others:
One theoretical possibility is that evolution is progressive and that some populations are more “advanced” than others. Rushton, 1992
Such notions of “evolutionary progress” and “superiority”—even back in my “HBD” days—never passed the smell test for me. In any case, how can East Asians be said to be “genetically superior”? What do “superior genes” or a “superior genome” look like? This has been outright stated by, for example, Lynn (1977), who proclaims—for the Japanese—that his “findings indicate a genuine superiority of the Japanese in general intelligence.” This claim, though, is refuted by the empirical data—what explains East Asian educational achievement is not “superior genes” but the belief that education is paramount for upward social mobility; to preempt discrimination, East Asians overperform in school (Sue and Okazaki, 1990).
Furthermore, the academic achievement of Asians cannot be reduced to Asian culture—the fact that they are hyper-selected is why social class matters less for Asian Americans (Lee and Zhou, 2017):
These counterfactuals illustrate that there is nothing essential about Chinese or Asian culture that promotes exceptional educational outcomes, but, rather, is the result of a circular process unique to Asian immigrants in the United States. Asian immigrants to the United States are hyper-selected, which results in the transmission and recreation of middle-class specific cultural frames, institutions, and practices, including a strict success frame as well as an ethnic system of supplementary education to support the success frame for the second generation. Moreover, because of the hyper-selectivity of East Asian immigrants and the racialisation of Asians in the United States, stereotypes of Asian-American students are positive, leading to ‘stereotype promise’, which also boosts academic outcomes
Inequalities reproduce at both ends of the educational spectrum. Some students are assumed to be low-achievers and undeserving, tracked into remedial classes, and then ‘prove’ their low achievement. On the other hand, others are assumed to be high-achievers and deserving of meeting their potential (regardless of actual performance); they are tracked into high-level classes, offered help with their coursework, encouraged to set their sights on the most competitive four-year universities, and then rise to the occasion, thus ‘proving’ the initial presumption of their ability. These are the spill-over effects and social psychological consequences of the hyper-selectivity of contemporary Asian immigration to the United States. Combined with the direct effects, these explain why class matters less for Asian-Americans and help to produce exceptional academic outcomes. (Lee and Zhou, 2017)
The success of second-generation Chinese Americans has, too, been held up as more evidence that the Chinese are ‘superior’ in their mental abilities—being deemed ‘model minorities’ in America. However, in Spain, the story is different. First- and second-generation Chinese immigrants score lower than the native Spanish population on standardized tests. The ‘types’ of immigrants that have emigrated has been forwarded as an explanation for why there are differences in attainments of Asian populations. For example, Yiu (2013: 574) writes:
Yet, on the other side of the Atlantic, a strikingly different story about Chinese immigrants and their offspring – a vastly understudied group – emerges. Findings from this study show that Chinese youth in Spain have substantially lower educational ambitions and attainment than youth from every other nationality. This is corroborated by recently published statistics which show that only 20 percent of Chinese youth are enrolled in post-compulsory secondary education, the prerequisite level of schooling for university education, compared to 40 percent of the entire adolescent population and 30 percent of the immigrant youth population in Catalonia, a major immigrant destination in Spain (Generalitat de Catalunyan, 2010).
… but results from this study show that compositional differences across immigrant groups by class origins and education backgrounds, while substantial, do not fully account for why some groups have higher ambitions than others. Moreover, existing studies have pointed out that even among Chinese American youth from humble, working-class origins, their drive for academic success is still strong, most likely due to their parents’ and even co-ethnic communities’ high expectations for them (e.g., Kao, 1995; Louie, 2004; Kasinitz et al., 2008).
The Chinese in Spain believe that education is a closed opportunity, and so they allocate their energy elsewhere—into entrepreneurship (Yiu, 2013). So, instead of pushing for education, Asian parents there push for entrepreneurship. What this shows is that what the Chinese do is based on context—on how they perceive they will be looked at in the society they emigrate to. US-born Chinese immigrants are shuttled toward higher education, whereas in the Netherlands the second-generation Chinese have lower educational attainment; the differences come down to national context (Noam, 2014). The Chinese in the U.S. are hyper-selected whereas the Chinese in Spain are not, and this shows—the Chinese in the US have high educational attainment whereas they have low educational attainment in Spain and the Netherlands. In fact, the Chinese in Spain show lower educational attainment than other ethnic groups (Central Americans, Dominicans, Moroccans; Lee and Zhou, 2017: 2236), which, to Americans, would come as a surprise.
Second-generation Chinese parents match their intergenerational transmission of their ethnocultural emphasis on education to the needs of their national surroundings, which, naturally, affects their third-generation children differently. In the U.S., adaptation implies that parents accept the part of their ethnoculture that stresses educational achievement. (Noam, 2014: 53)
So what explains the higher educational attainment of Asians? A mixture of culture and immigrant (hyper-)selectivity, along with the belief that education is paramount for upward mobility (Sue and Okazaki, 1990; Hsin and Xie, 2014; Lee and Zhou, 2017) and the fact that what a Chinese immigrant chooses to do is based on national context (Noam, 2014; Lee and Zhou, 2017). Poor Asians do indeed perform better on scholastic achievement tests than poor whites and poor ‘Hispanics’ (Hsin and Xie, 2014; Liu and Xie, 2016). Teachers even favor Asian American students, perceiving them to be brighter than other students. But what are assumed to be cultural values are actually class values, which is due to the hyper-selectivity of Asian immigrants to America (Hsin, 2016).
The fact that the term “Mongoloid idiot” was coined for those with Down syndrome because they looked Asian is very telling (see Hilliard, 2012 for discussion). But the IQ-ists switched from talking about Caucasian superiority to Asian superiority right as the East began its economic boom (Lieberman, 2001). The fact that there were disparate “estimates” of skulls over these centuries points to the fact that such “scientific observations” are painted with a cultural brush. See eg table 1 from Lieberman (2001):
This tells us, again, that our “scientific objectivity” is clouded by political and economic prejudices of the time. This allows Rushton to proclaim “If my work was motivated by racism, why would I want Asians to have bigger brains than whites?” Indeed, what a good question. The answer is that the whole point of “HBD race realism” is to denigrate blacks, so as long as whites are above blacks in their little self-made “hierarchy” no such problem exists for them (Hilliard, 2012).
Note how Rushton’s long-debunked r/K selection theory (Anderson, 1991; Graves, 2002) took the current social hierarchy and ranked dozens of traits as M > C > N (Mongoloids, Caucasoids, and Negroids respectively, to use Rushton’s outdated terminology). It is a political statement to put the ‘Mongoloids’ at the top of the racial hierarchy; the goal of ‘HBD’ is to denigrate blacks. But do note that in the late 19th to early 20th century, East Asians were deemed to have small brains and large penises, and that Japanese men, for instance, would “debauch their [white] female classmates” (quoted in Hilliard, 2012: 91).
The “IQ” of China (along with scores on other standardized tests such as TIMSS and PISA), in light of the scandals regarding standardized testing, should be suspect. Richard Lynn has failed to report dozens of studies that show low IQ scores for China, thus inflating their scores. This is, yet again, another nail in the coffin for the Cold Winters theory, since the story is formulated on the basis of cherry-picked IQ scores of children. I have noted that if we had different assumptions we would have different evolutionary stories. Thus, if the other data were provided and, say, Chinese IQ were found to be lower, we would just create a story to justify the score. This is illustrated wonderfully by Flynn (2019):
I will only say that I am suspicious of these because none of us can go back and really evaluate environment and mating patterns. Given free reign, I can supply an evolutionary scenario for almost any pattern of current IQ scores. If blacks had a mean IQ above other races I could posit something like this: they benefitted from exposure to the most rigorous environmental conditions possible, namely, competition from other people. Thanks to greater population pressures on resources, blacks would have benefitted more from this than any of those who left at least for a long time. Those who left eventually became Europeans and East Asians.
The hereditarians point to the academic success of East Asians in America as proof that IQ tests ‘measure’ intelligence, but East Asians in America are a hyper-selected sample. As the references I have provided show, second-generation Chinese immigrants in Spain and the Netherlands show lower educational attainment than other ethnies (the opposite is true in America), and this is explained by the context that the immigrant family finds itself in—where do you allocate your energy, education or entrepreneurship? Such choices seem to be class-based, since education is championed by the Chinese in America but not in Spain and the Netherlands. These facts refute any claims of ‘genetic superiority’; they also refute, for that matter, the claim that genes matter for educational attainment (and therefore IQ), although we did not need to know this to know that IQ is a bunk ‘measure.’
So if the Chinese cheat on standardized tests, then we should not accept their IQ scores; the fact that they, for example, provide non-random children from large provinces speaks to their dishonesty. They are like Lynn, in a way, avoiding the evidence that IQ scores are not what they seem—both Lynn and the Chinese government are dishonest cherry-pickers. The ‘fact’ that East Asian educational attainment can be attributed to genes is false; it is attributable to hyper-selectivity and to notions of class and of what constitutes ‘success’ in the country they emigrate to—so what they attempt is based on (environmental) context.
In a conversation with an IQ-ist, one may eventually find themselves discussing the concept of “superiority” or “inferiority” as it regards IQ. The IQ-ist may say that only critics of the concept of IQ place any sort of value-judgments on the number one gets when they take an IQ test. But if the IQ-ist says this, then they are showing their ignorance of the history of the concept of IQ. The concept was, in fact, formulated to show who was more “intelligent”—“superior”—and who was less “intelligent”—“inferior.” Here is the thing, though: The terms “superior” and “inferior” are anatomic terms, which shows the folly of their attempted appropriation.
Superiority and inferiority
If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, they need only check out Lewis Terman’s very first Stanford-Binet tests. His scales—now in their fifth edition—state that IQs between 120 and 129 are “superior,” while 130-144 is “gifted or very advanced” and 145-160 is “very gifted” or “highly advanced.” How strange… But the IQ-ist can say that they were just products of their time and that no serious researcher believes such foolish things—that one is “superior” to another on the basis of an IQ score. Yet if “superior” is taken seriously as anatomic terminology, what about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It is ridiculous to take anatomic terminology (for physical things) and attempt to use it to describe mental “things.”
But, perhaps the most famous hereditarian Arthur Jensen, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies then we are in danger of creating a “genetic underclass” (Jensen, 1969). This, as does the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion on the concept of heritability.)
This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the current social hierarchy of the time, which would then be used as justification for their spot on the social hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt to forward eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as is evidenced a few decades before the conceptualization of standardized testing.
Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and intelligence. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who was or was not “intelligent.” Then, some years later, Binet and Simon finally constructed a test that discriminated between who they felt was or was not intelligent—which discriminated by social class. This test was finally the “measure” that would differentiate between social classes, since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in, on the basis of their IQ scores, which would show how they would work based on their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaprayoon, and Martin (2017) write that:
The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European descent over non-Whites and recent immigrants (Gersh, 1987). By developing exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So, as one can see, this “superiority” was baked into IQ tests from the very start, and the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:
With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.
While Binet and Simon were agnostic on the nature-nurture debate, the test items that they most liked were those that differentiated between social classes the most (which means they were consciously chosen for that goal). But reading about their “ideal city,” we can see that those who have higher test scores are “superior” to those who do not. They were operating under the assumption that they would be organizing society along class lines, with the tests being measures of group mental ability. For Binet and Simon, it did not matter whether the “intelligence he sought to define” was inherited or acquired; they just assumed that it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making them circular (Au, 2009).
The whole reason why Binet and Simon developed their test, then, was to rank people from “best” to “worst,” “good” to “bad.” But this does not mean that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had pronouncements of such ranking built in, even if it is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).
The goal, then, of psychometry is clear. Garrison (2009: 12) writes:
Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.
But, such notions of superiority and inferiority, as I have stated back in 2018, are nonsense when taken out of anatomic context:
It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.
An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (terms for physical measures) and attempt to liken it to IQ—because nothing physical is being measured, not least since the mental is neither physical nor reducible to the physical.
They were presuming to measure one’s “intelligence” and then stating that one has ‘superior’ “intelligence” to another—and that IQ tests were measuring this “superiority”. However, psychometrics is not a form of measurement—rankings are not measures.
With standardized testing, knowledge becomes reducible to a score, so students, and in effect their learning and knowledge, are reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). So what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).
In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)
…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)
One of the best examples of a valid measure is temperature—and it has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature, of what is hot and what is cold. Temperature is a physical property which quantitatively expresses heat and cold. Thermometers were invented to quantify temperature, whereas IQ tests were invented to quantify “intelligence.” IQ-ists like Jensen attempt to draw an analogy between temperature and IQ, thermometers and IQ tests: thermometers measure temperature with a high degree of reliability, and so too, Jensen claims, do IQ tests.
So, IQ-ists claim, temperature is what thermometers measure, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and only then could numerical thermometers be constructed, with a procedure for assigning numbers to degrees of heat between and beyond the fixed points. The thermoscope is what was used to establish fixed points in the first place. Since the thermoscope itself has no fixed points, we do not have to circularly rely on the concept of fixed points for reference; if the thermoscope rises and falls, we can rightly infer, for example, that the temperature of blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into scalding hot water and then put the thermoscope in the same water, we note that it rises rapidly. So the thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ justifies, in a non-circular way, the claim that temperature is truly being measured. We trust the physical sensation we get from whichever surface we are touching, and from this we can infer that thermoscopes validate thermometers, making the concept of temperature a true measure of hot and cold, validated in a non-circular manner. (See Chang, 2007 for a full discussion of the measurement of temperature.)
Thermometers could be tested by the criterion of comparability, whereas IQ tests, on the other hand, are “validated” circularly against tests of educational achievement, other IQ tests which were not themselves validated, and job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017). This makes the “validation” circular since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).
For example, take intro chemistry. When one takes the intro course, they see how things are measured. Chemists may measure in moles or grams, note the physical state of a substance, and so on. We may measure water displacement, reactions between different chemicals, or the like. And although chemistry does not reduce to physics, these are all actual physical measures.
But the same cannot be said for IQ (Nash, 1990). We can rightly say that one scores higher than another on an IQ test, but that does not signify that some “thing” is being measured, because, to use the temperature example again, there is no independent validation of the “construct.” IQ is a (latent) construct, whereas temperature is a quantitative measure of hot and cold. Temperature really exists, but the same cannot be said about IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature, for example (Midgley, 2018).
Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside a building or outside. One may say that we observe “intelligence” daily, but that is NOT a “measure”, it’s just a descriptive claim. Blood pressure is another physical measure: it refers to the pressure in the large arteries of the circulatory system, which is due to the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievement then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).
It should also be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence—but thermometers are not identical to standardized scales. This claim fails, as Nash (1990: 131) notes:
In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.
This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING! that is important for life success, since test scores correlate with life outcomes. Yet there is no precise specification of the measured object, no object of measurement, and no measurement unit, so this “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).
Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. The scores do not reflect any inherent property of individuals; they reflect one’s relation to the society one is in (since all standardized tests are proxies for social class).
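The point that an IQ score is an artifact of the norming population can be made concrete. Below is a minimal sketch (hypothetical numbers, not real test data) of the standard deviation-IQ computation: a raw score is standardized against a chosen norming sample and rescaled to a mean of 100 and an SD of 15. The identical raw performance receives a different “IQ” depending solely on who happens to be in the norming sample, which illustrates that the score expresses relative standing, not an intrinsic quantity.

```python
import statistics

def iq_score(raw, norm_sample):
    """Deviation IQ: standardize a raw score against a norming sample,
    then rescale to the conventional mean of 100 and SD of 15."""
    mu = statistics.mean(norm_sample)
    sigma = statistics.stdev(norm_sample)
    return 100 + 15 * (raw - mu) / sigma

# Two hypothetical norming samples with the same spread but different means.
norm_a = [40, 45, 50, 55, 60]
norm_b = [50, 55, 60, 65, 70]

# The identical raw score of 55 gets two different "IQs":
# above 100 relative to sample A, below 100 relative to sample B.
print(iq_score(55, norm_a))
print(iq_score(55, norm_b))
```

Nothing about the hypothetical test-taker changes between the two calls; only the reference population does.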
One only needs to read the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet, and then on to Terman, Yerkes, and Goddard, the goal has been clear: enact eugenic policies on those deemed “unintelligent” by IQ tests, people who just so happen to correspond with the lower classes in virtue of how the tests were constructed, which goes back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence”, which then reproduce the social structure and “justify” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
The attempts to hijack anatomic terminology are, as I have shown, nonsense, since such terminology has no application outside of its anatomic context; and the first IQ-ists were explicit about what they were attempting to “show”, which still holds for all standardized testing today.
Binet, Terman, Yerkes, Goddard, and others all had their own priors, which led them to construct tests in such a way as to reach their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this can be verified by reading Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policy).
Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and in no way, shape, or form—contra Jensen—like the concept of IQ/”intelligence”, which Jensen conflates with them (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983, and Nash, 1990: chapter 8, for a discussion of Berka’s measurement objection).
For these reasons, we should not claim that IQ tests ‘measure’ “intelligence”, nor that they measure one’s “genetic standing” or how “superior” one is to another; we should instead recognize that psychometrics is nothing more than a political ranking exercise.
In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)
In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the founder of sociobiology E.O. Wilson’s religious beliefs and the epiphany he described when he learned of evolution. A Christian author then used Sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked-in—IQ.
Philosopher of education John White has looked into the origins of IQ testing in the Puritan religion. The main link between Puritanism and IQ is predestination. The first IQ-ists conceptualized IQ—‘g’ or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, whether one went to Hell was predestined before one even existed as a human being, whereas to the IQ-ists, IQ was predestined by genes.
John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:
Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton’s phrase ‘the extreme classes, the best and the worst.’ On the one hand there were the ‘gifted’, ‘the eminent’, ‘those who have honourably succeeded in life’, presumably … the most valuable portion of our human stock. On the other, the ‘feeble-minded’, the ‘cretins’, the ‘refuse’, those seeking to avoid ‘the monotony of daily labor’, democracy’s ballast, ‘not always useless but always a potential liability’.
A Puritan-type parallel can be drawn here—the ‘cretins’ and ‘feeble-minded’ are ‘the damned’ whereas the ‘gifted’ and ‘eminent’ are ‘the saved.’ This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a certain surfeit of genes that influence intellectual attainment. Contrast this with the Puritan belief that certain people are chosen, before they exist, to be either damned or saved: on the hereditarian view, certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism when speaking of IQ is, in a way, just like Puritan predestination—and has been according to Galton, Burt, and other IQ-ists since the 1910s-1920s (ever since Goddard brought the Binet-Simon Scales back from France in 1910).
Some Puritans banned the poor from their communities, seeing them as “disruptors to Puritan communities.” Stone (2018: 3-4), in An Invitation to Satan: Puritan Culture and the Salem Witch Trials, writes:
The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.
Could the Puritan treatment of the poor be due to their belief in predestination? Puritan John Winthrop stated in his book A Model of Christian Charity that “some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection.” This, too, is still around today: IQ is said to set “upper limits” on one’s “ability ceiling” to achieve X, and the poor are those who do not have the ‘right genes’. This is also a reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The claim that one’s ability is predetermined in the genes—that each person has their own ‘ceiling of ability’ constrained by their genes—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.
To White (2006), the claim that we have this ‘innate capacity’, this ‘general intelligence’, is wanting. He takes this further, though. In discussing Galton’s and Burt’s claim that there are ‘ability ceilings’—and a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs in a study and that their scores increase by 15 points. This, then, would have a large effect on the correlation: “So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied ‘I doubt whether, had we returned a second time, the coaching would have affected our correlations’” (White, 2006: 16). Burt seems to be implying that a “ceiling of ability” exists, a notion he got from his mentor, Galton. White continues:
It would appear that neither Galton nor Burt has any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.
I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)
Burt believed that we should use IQ tests to shoehorn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed into vocations based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:
Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.
So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school which ‘showed us’ which vocations we would be ‘good at’ based on a questionnaire). Having people in Binet’s ‘ideal city’ work based on their ‘known aptitudes’ would increase, not decrease, inequality, so Binet’s envisioned city is exactly the same as today’s world. Mensh and Mensh (1991: 24) continue:
When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.
White (2006: 42) writes:
Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.
Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. There are parallels between the Puritan concern for salvation and the IQ-ist belief that one’s ‘innate intelligence’ dictates whether one will succeed or fail in life (based on one’s genes). Both associated those lower on the social ladder, their work ethic, and their morals with the reprobate on the one hand and with low-IQ people on the other. Both groups believed that the family is the ‘mechanism’ by which individuals are ‘saved’ or ‘damned’—the Puritans presuming that salvation is transmitted through one’s family, and the IQ-ists presuming that those with ‘high intelligence’ have children with the same. And both believed that their favored group should be at the top with the best jobs and the best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one is endowed with ‘innate general intelligence’ due to genes, a concept the current-day IQ-ists have taken up.
White drew his parallel between IQ and Puritanism without being aware that one of the first anti-IQ-ists—an American journalist named Walter Lippmann—had drawn the same parallel back in the mid-1920s. (See Mensh and Mensh, 1991 for a discussion of Lippmann’s grievances with the IQ-ists.) The parallel holds between Puritanism and Galton’s concept of ‘intelligence’ as well as that of the IQ-ists today. White (2005: 440) notes “that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence.” Though White is cautious here, the evidence he has marshaled in favor of the claim is interesting, and, as noted, many parallels exist. It would be some huge coincidence for all of these parallels to exist without a causal link (from Puritan beliefs to hereditarian IQ dogma).
This is similar to what Oyama (1985: 53) notes:
Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.
But this parallel between Puritanism and hereditarianism doesn’t just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of ‘information’ before activated by the physiological system for its uses still pervades our thought today, even though many others have been at the forefront to change that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).
The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have now been identified as having their origins in eugenic beliefs, along with Puritan-like beliefs about being saved/damned based on something that is predetermined and out of your control, just like your genetics. The conception of ‘ability ceilings’—assessed using IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in ‘ability ceilings’ and claim that genes contain a kind of “blueprint” (a notion still held today) which predestines one toward certain dispositions/behaviors/actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is ‘known’ about one’s group. When Binet wrote that, the gene had yet to be conceptualized, but the idea has stayed with us ever since.
So not only did the concept of “IQ” emerge from the ‘need’ to ‘identify’ individuals for the ‘aptitudes’ they would supposedly be well-suited for (in, for instance, Binet’s ideal city), it also arose from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the ‘predictions’ it ‘makes’ (see Nash, 1990).
1. If differences in mental abilities are inherited, and
2. if success requires those abilities, and
3. if earnings and prestige depend on success,
4. then social standing will be based to some extent on inherited differences among people. (Herrnstein, 1971)
Richard Herrnstein’s article I.Q. in The Atlantic (Herrnstein, 1971) caused much controversy (Herrnstein and Murray, 1994: 10). Herrnstein’s syllogism argued that as environments become more similar, if differences in mental abilities are inherited, if success in life requires such abilities, and if earnings and prestige depend on success, then social standing will be based “to some extent on inherited differences among people.” Herrnstein does not say so outright in the syllogism, but he is quite obviously talking about genetic inheritance. One can, however, look at the syllogism through an environmental lens, as I will show. Moreover, Herrnstein’s syllogism crumbles since social class is predictive of success in life when both IQ and social class are equated: since family background and schooling explain the IQ-income relationship (a measure of success), Herrnstein’s argument falls.
Note that Herrnstein came to measurement due to being a student of William Sheldon’s somatotyping. “Somatotyping lured the impressionable and young Herrnstein into a world promising precision and human predictability based on the measurement of body parts” (Hilliard, 2012: 22).
- If differences in mental abilities are inherited
Premise 1 is simple: “If differences in mental ability are inherited …” Herrnstein is obviously talking about genetic transmission, but we can look at this through a cultural/environmental lens. For example, Berg and Belmont (1990) showed that Jewish children of different socio-cultural backgrounds had different patterns of mental abilities, which clustered in certain socio-cultural groups (all Jewish), showing that mental abilities are, in large part, culturally derived. Another objection could be that since there are no laws linking psychological/mental states with physical states (the mental is irreducible to the physical, meaning that mental states cannot be transmitted through (physical) genes), genetic transmission of psychological/mental traits is impossible. In any case, one can appeal to cultural transmission of mental abilities, disregard genetic transmission of psychological traits, and the argument fails.
We can accept all of the premises of Herrnstein’s syllogism and argue an environmental case, in fact (bracketed words are my additions):
1. If differences in mental abilities are [environmentally] inherited, and
2. if success requires those [environmentally inherited] abilities, and
3. if earnings and prestige depend on [environmentally inherited] success,
4. then social standing will be based to some extent on [environmentally] inherited differences among people.
The syllogism hardly changes, but my additions change what Herrnstein was arguing for—environmental, not genetic, differences cause success and, along with it, social standing among groups of people.
The Bell Curve (Herrnstein and Murray, 1994) can, in fact, be seen as an at-length attempt to prove the validity of the syllogism empirically. Herrnstein and Murray (1994: 105, 108-110) have a full discussion of the syllogism. “As stated, the syllogism is not fearsome” (Herrnstein and Murray, 1994: 105). They go on to state that if intelligence (IQ scores, AFQT scores) is only a bit influenced by genes, and if success is only a bit influenced by intelligence, then only a small amount of success is inherited (genetically). Note that their measure of “IQ” is the AFQT—a measure of acculturated learning which tracks school achievement (Roberts et al, 2000; Cascio and Lewis, 2005).
“How much is IQ a matter of genes?“, Herrnstein and Murray ask. They then discuss the heritability of IQ, relying, of course, on twin studies. They claim that the heritability of IQ is .6 based on the results of many twin studies. But the fatal flaw with twin studies is that the equal environments assumption (EEA) is false and, therefore, genetic conclusions should be dismissed outright (Burt and Simons, 2014, 2015; Joseph, 2015; Joseph et al, 2015; Fosse, Joseph, and Richardson, 2015; Moore and Shenk, 2016). Herrnstein (1971) also discusses twin studies in the context of heritability, attempting to buttress his argument. But if the main vehicle used to show that “intelligence” (whatever that is) is heritable is twin studies, why should we accept the conclusions of twin research when the assumptions at the foundation of the field are false?
Murray has stated: “When I – when we – say 60 percent heritability, it’s not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Later, he repeated that for the average person, “60 percent of the intelligence comes from heredity” and added that this was true of the “human species,” missing the point that heritability makes no sense for an individual and that heritability statistics are population-relative.
So Murray used the flawed concept of heritability in the wrong way—hilarious.
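The population-relativity of heritability is easy to demonstrate with a toy simulation (illustrative numbers only, not an empirical claim about IQ). Phenotypes are built as a genetic value plus an environmental deviation, and “heritability” is computed as it is defined: the ratio of genetic variance to total phenotypic variance in a population. Holding the genetic variance fixed and changing only how variable the environments are changes the heritability, even though no individual’s genes change at all, which is why a statement like “60 percent of the IQ in any given person” is incoherent.

```python
import random
import statistics

random.seed(42)

def toy_heritability(n, genetic_sd, env_sd):
    """Simulate phenotype = genetic value + environmental deviation and
    return Var(G) / Var(P): a ratio of population variances, which is
    undefined for a single individual."""
    g = [random.gauss(0, genetic_sd) for _ in range(n)]
    e = [random.gauss(0, env_sd) for _ in range(n)]
    p = [gi + ei for gi, ei in zip(g, e)]
    return statistics.pvariance(g) / statistics.pvariance(p)

# Identical genetic variance in both simulated populations; only the
# environmental spread differs, yet the "heritability" differs sharply.
print(toy_heritability(100_000, genetic_sd=10, env_sd=5))   # ~0.8
print(toy_heritability(100_000, genetic_sd=10, env_sd=20))  # ~0.2
```

This is also the kernel of Herrnstein’s uniform-environments premise: shrinking environmental variance mechanically raises the heritability ratio without anyone’s genes contributing more in any absolute sense.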
So the main point of Herrnstein’s argument is that as environments become more uniform for everyone, the power of heredity will shine through, since the environment will be the same for everyone. But even if we could make the environment “the same”, what would that even mean? How is my environment the same as yours, even if our surroundings are identical, if I react to or see the same thing differently than you do? The subjectivity of the mental undercuts the claim that environments can become “more uniform.” Herrnstein claimed that if no variance in environment exists, then the only thing that can influence success is heredity. This is not wrong in itself, but how would it be possible to equalize environments? Are we supposed to start from square one? Should the rich and powerful give up their wealth and status to “equalize environments”? According to Herrnstein and the ‘meritocracy’, those whose earnings and prestige depended on success, which depended on inherited mental abilities, would still float to the top.
But what happens when both social class and IQ are equated? What predicts life success then? Stephen Ceci reanalyzed the data on Terman’s Termites (the term coined for the participants in Terman’s study) and found something quite different from what Terman had assumed. There were three groups in Terman’s study—groups A, B, and C. Groups A and C comprised the top and bottom 20 percent of the full sample in terms of life success. At the start of the study, all of the children “were about equal in IQ, elementary school grades, and home evaluations” (Ceci, 1996: 82). Depending on the test used, the IQs of the children ranged from 142 to 155, which then decreased by ten points during the second wave due to regression to the mean and measurement error. So although groups A and C had equivalent IQs, they had starkly different life outcomes. (Group B comprised 60 percent of the sample and enjoyed mediocre life success.)
Ninety-nine percent of the men in the group that had the best professional and personal accomplishments, i.e., group A were individuals who came from professional or business-managerial families that were well educated and wealthy. In contrast, only 17% of the children from group C came from professional and business families, and even these tended to be poorer and less well educated than their group A peers. The men in the two groups present a contrast on all social indicators that were assessed: group A individuals preferred to play tennis, while group C men preferred to watch football and baseball; as children, the group A men were more likely to collect stamps, shells, and coins than were the group C men. Not only were the fathers of the group A men better educated than those of group C, but so were their grandfathers. In short, even though the men in group C had equivalent IQs to group A, they did not have equivalent social status. Thus, when IQ is equated and social class is not, it is the latter that seems to be deterministic of professional success. Therefore, Terman’s findings, far from demonstrating that high IQ is associated with real-world success, show that the relationship is more complex and that the social status of these so-called geniuses’ families had a “long reach,” influencing their personal and professional achievements throughout their adult lives. Thus, the title of Terman’s volumes, Genetic Studies of Genius, appears to have begged the question of the causation of genius. (Ceci, 1996: 82-83)
Ceci used the Project Talent dataset to analyze the impact of IQ on occupational success. This study, unlike Terman’s, looked at a nationally representative sample of 400,000 high-school students “with both intellectual aptitude and parental social class spanning the entire range of the population” (Ceci, 1996: 85). The students were interviewed in 1960, then about 4,000 were again interviewed in 1974. “For all practical purposes, this subgroup of 4,000 adults represents a stratified national sample of persons in their early 30s” (Ceci, 1996: 86). So Ceci and his co-author, Henderson, ran several regression analyses that involved years of schooling, family and social background and a composite score of intellectual ability based on reasoning, math, and vocabulary. They excluded those who were not working at the time due to being imprisoned, being housewives or still being in school. This then left them with a sample of 2,081 for the analysis.
In one analysis, they looked at IQ as a predictor of variance in adult income, which showed an effect of IQ. “However, when we entered parental social status and years of schooling completed as additional covariates (where parental social status was a standardized score, mean of 100, SD = 10, based on a large number of items having to do with parental income, housing costs, etc.—ranging from low of 58 to high of 135), the effects of IQ as a predictor were totally eliminated” (Ceci, 1996: 86). Social class and education were very strong predictors of adult income. So “this illustrates that the relationship between IQ and adult income is illusory because the more completely specified statistical model demonstrates its lack of predictive power and the real predictive power of social and educational variables” (Ceci, 1996: 86).
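Ceci’s point—that IQ’s apparent effect on income can vanish once social class and schooling are entered as covariates—can be illustrated with a toy regression. The sketch below uses simulated data of my own construction (not Ceci’s dataset, though the sample size is borrowed from it): income is generated from schooling and parental SES only, while IQ merely tracks SES, so IQ looks predictive until the covariates are added.

```python
import numpy as np

# Toy illustration (simulated data, NOT Ceci's): income is generated
# from schooling and parental SES only; IQ merely tracks SES.
rng = np.random.default_rng(0)
n = 2081  # sample size borrowed from Ceci & Henderson's analysis

ses = rng.normal(100, 10, n)                             # parental social status
school = 0.1 * ses + rng.normal(0, 1, n)                 # schooling tracks SES
iq = 100 + 1.5 * (ses - 100) + rng.normal(0, 10, n)      # IQ tracks SES
income = 50 * school + 30 * ses + rng.normal(0, 100, n)  # no IQ term at all

def ols(y, *predictors):
    """OLS coefficients (intercept first) of y on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_iq_alone = ols(income, iq)[1]              # IQ alone looks predictive...
b_iq_ctrl = ols(income, iq, ses, school)[1]  # ...but shrinks to ~0 with covariates
print(f"IQ alone: {b_iq_alone:.2f}, IQ with covariates: {b_iq_ctrl:.2f}")
```

The IQ coefficient is large when IQ is the sole predictor and collapses toward zero once SES and schooling enter the model, which is the pattern of the “more completely specified statistical model” Ceci describes.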
They then considered high, average, and low IQ groups of about equal size, examining the regressions of earnings on social class and education within each group.
Regressions were essentially homogeneous and, contrary to the claims by those working from a meritocratic perspective, the slope for the low IQ group was steepest (see Figure 4.1). There was no limitation imposed by low IQ on the beneficial effects of good social background on earnings and, if anything, there was a trend toward individuals with low IQ actually earning more than those with average IQ (p = .09). So it turns out that although both schooling and parental social class are powerful determinants of future success (which was also true in Terman’s data), IQ adds little to their influence in explaining adult earnings. (Ceci, 1996: 86)
The same was true for the Project Talent participants who continued their schooling: each additional increment of school completed had an effect on their earnings.
Individuals who were in the top quartile of “years of schooling completed” were about 10 times as likely to be receiving incomes in the top quartile of the sample as were those who were in the bottom quartile of “years of schooling completed.” But this relationship does not appear to be due to IQ mediating school attainment or income attainment, because the identical result is found even when IQ is statistically controlled. Interestingly, the groups with the lowest and highest IQs both earned slightly more than average-IQ students when the means were adjusted for social class and education (adjusted means at the modal value of social class and education = $9,094, $9,242, and $9,997 for low, average, and high IQ groups, whereas the unadjusted means at this same modal value = $9,972, $9,292, and $9,278 for the low, average, and high IQs.) (Perhaps the low IQ students were tracked into plumbing, cement finishing and other well-paying jobs and the high-IQ students were tracked into the professions, while average IQ students became lower paid teachers, social workers, ministers, etc.) Thus, it appears that the IQ-income relationship is really the result of schooling and family background, and not IQ. (Incidentally, this range in IQs from 70 to 130 and in SES from 58 to 135 covers over 95 percent of the entire population.) (Ceci, 1996: 87-88)
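The “adjusted means” in the passage above come from evaluating each IQ group’s predicted earnings at a common value of the covariates. Here is a minimal sketch of that covariate-adjustment procedure, again on simulated data (the group sizes, coefficients, and modal values are my own assumptions, not Ceci’s): raw group means differ because the groups differ in SES and schooling, but adjusting to a common covariate value makes them converge.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 900
group = np.repeat([0, 1, 2], n // 3)          # low, average, high IQ groups
ses = rng.normal(100, 10, n) + 3 * group      # higher-IQ groups are richer
school = 12 + 0.1 * (ses - 100) + rng.normal(0, 1, n)
# Earnings depend on SES and schooling only (no direct IQ-group effect):
earn = 2000 + 40 * ses + 300 * school + rng.normal(0, 500, n)

# Dummy-code the groups and fit one regression with covariates.
d_avg = (group == 1).astype(float)
d_high = (group == 2).astype(float)
X = np.column_stack([np.ones(n), ses, school, d_avg, d_high])
beta, *_ = np.linalg.lstsq(X, earn, rcond=None)

# Raw means differ because the groups differ in SES/schooling...
raw = [earn[group == g].mean() for g in (0, 1, 2)]
# ...but means evaluated at a common (modal) SES/schooling value converge.
ses0, school0 = 100.0, 12.0
base = beta[0] + beta[1] * ses0 + beta[2] * school0
adj = [base, base + beta[3], base + beta[4]]
print("raw:", raw, "adjusted:", adj)
```

Because earnings were generated without any direct IQ-group effect, the high-low gap in the raw means (driven by SES and schooling) largely disappears in the adjusted means.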
Ceci’s analysis parallels Bowles and Nelson’s (1974) analysis, in which they found that earnings in adulthood were influenced more by social status and schooling than by IQ. Bowles and Nelson (1974: 48) write:
Evidently, the genetic inheritance of IQ is not the mechanism which reproduces the structure of social status and economic privilege from generation to generation. Though our estimates provide no alternative explanation, they do suggest that an explanation of intergeneration immobility may well be found in aspects of family life related to socio-economic status and in the effects of socio-economic background operating both directly on economic success, and indirectly via the medium of inequalities in educational attainments.
(Note how this also refutes claims from PumpkinPerson that IQ explains income—clearly, as was shown, family background and schooling explain the IQ-income relationship, not IQ. So the “incredible correlation between IQ and income” is not due to IQ, it is due to environmental factors such as schooling and family background.)
Herrnstein’s syllogism—along with The Bell Curve (an attempt to prove the syllogism)—is therefore refuted. Since social class/family background and schooling explain the IQ-income relationship, and not IQ, Herrnstein’s syllogism crumbles. It was a main premise of The Bell Curve that society is becoming increasingly genetically stratified, with a “cognitive elite”. But Conley and Domingue (2015: 520) found “little evidence for the proposition that we are becoming increasingly genetically stratified.”
IQ testing legitimizes social hierarchies (Chomsky, 1972; Roberts, 2015) and, in Herrnstein’s case, attempted to show that social hierarchies are an inevitability due to the genetic transmission of mental abilities that influence success and income. Such research cannot be socially neutral (Roberts, 2015) and so, this is yet another reason to ban IQ tests, as I have argued. IQ tests are a measure of social class (Ceci, 1996; Richardson, 2002, 2017), and such tests were created to justify existing social hierarchies (Mensh and Mensh, 1991).
Thus, the very purpose of IQ tests was to confirm the current social order as naturally proper. Intelligence tests were not misused to support hereditary theories of social hierarchies; they were perfected in order to support them. The IQ supplied an essential difference among human beings that deliberately reflected racial and class stratifications in order to justify them as natural. Research on the genetics of intelligence was far from socially neutral when the very purpose of theorizing the heritability of intelligence was to confirm an unequal social order. (Roberts, 2015: S51)
Herrnstein’s syllogism seems sound, but in actuality, it is not. Herrnstein was implying that genes were the cause of mental abilities and then, eventually, success and prestige. But one can look at Herrnstein’s syllogism from an environmentalist point of view (do note that the hereditarian/environmentalist debate is futile and continues the claim that IQ tests test ‘intelligence’, whatever that is). When matched for IQ—in regard to Terman’s Termites—family background and schooling explained the IQ-income relationship. Further analyses showed that this, again, was the case: Ceci (1996), replicating the lessons of Terman’s data and of Bowles and Nelson’s (1974) analysis, showed that social class and schooling, not IQ, explain the relationship between IQ and income.
The conclusion of Herrnstein’s argument can, as I’ve already shown, be an environmental one—through cultural, not genetic, transmission. Arguments of this kind hold that IQ is ‘genetic’ and that, thus, certain individuals/groups will tend to stay in their social class; as Pinker (2002: 106) states: “Smarter people will tend to float into the higher strata, and their children will tend to stay there.” This, as has been shown, is due to social class, not ‘smarts’ (scores on an IQ test). In any case, this is yet another reason why IQ tests and the research behind them should be banned: IQ tests attempt to justify the current social order as ‘inevitable’ due to genes that influence mental abilities. This claim, though, is false and, therefore—along with the fact that America is not becoming more genetically stratified (Conley and Domingue, 2015)—Herrnstein’s syllogism crumbles. The argument attempts to justify the claim that class has a ‘genetic’ component (as Murray, 2020, attempts to show), but subsequent analyses and arguments have shown that Herrnstein’s argument does not hold.
Mary Midgley (1919-2018) was a philosopher perhaps best known for her writing on moral philosophy and her rejoinders to Richard Dawkins after the publication of The Selfish Gene. She published What Is Philosophy For? on September 21, 2018, shortly before her death that October. In the book, she discusses ‘intelligence’ and its ‘measurement’ and comes to familiar conclusions.
‘Intelligence’ is not a ‘thing’ like, say, temperature or weight (though it is reified as one). Thermometers measure temperature, and this was verified without relying on the thermometer itself (see Hasok Chang, Inventing Temperature). Temperature can be measured in units like Kelvin, Celsius, and Fahrenheit. Temperature reflects the average kinetic energy of the particles in a substance; ‘thermo’ means heat while ‘meter’ means to measure, so heat is what is being measured with a thermometer.
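Part of what makes temperature a genuine measurement is that its units are interconvertible by fixed, theory-backed formulas—there is nothing analogous for IQ scores. A minimal sketch of those conversions (my own illustration, not from Midgley or Chang):

```python
def c_to_f(c: float) -> float:
    """Celsius to Fahrenheit: exact linear relation between the two scales."""
    return c * 9 / 5 + 32

def c_to_k(c: float) -> float:
    """Celsius to Kelvin: same unit size, offset by absolute zero (-273.15 degrees C)."""
    return c + 273.15

# Water's boiling point at 1 atm, expressed in all three units:
print(c_to_f(100.0), c_to_k(100.0))  # 212.0 373.15
```

Any thermometer reading can be carried from one scale to another without loss because the scales are all anchored to the same physical quantity; no such conversion exists between, say, two different IQ test batteries, which is one symptom of the missing measurement unit.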
Scales measure weight. If energy balance is stable, weight will be stable too. Eat too much or too little, and weight gain or loss will occur. Animals even seem to have a body-weight set point, which has been experimentally demonstrated (Leibel, 2008). In any case, what a scale measures is the overall weight of an object, which it does by measuring the force between the weighed object and the earth.
The whole concept of ‘intelligence’ is hopelessly unreal.
Prophecies [like those of people who work on AI] treat intelligence as a quantifiable stuff, a standard, unvarying, substance like granulated sugar, a substance found in every kind of cake — a substance which, when poured on in larger quantities, always produces a standard improvement in performance. This mythical way of talking has nothing to do with the way in which cleverness — and thought generally — actually develops among human beings. This imagery is, in fact, about as reasonable as expecting children to grow up into steamrollers on the ground that they are already getting larger and can easily be trained to stamp down gravel on roads. In both cases, there simply is not the kind of continuity that would make any such progress conceivable. (Midgley, 2018: 98)
We recognize the divergence of interests all the time when we are trying to find suitable people for different situations. Thus Bob may be an excellent mathematician but is still a hopeless sailor, while Tim, that impressive navigator, cannot deal with advanced mathematics at all. Which of them, then, should be considered the more intelligent? In real life, we don’t make the mistake of trying to add these people’s gifts up quantitatively to make a single composite genius and then hope to find him. We know that planners wanting to find a leader for their exploring expedition must either choose between these candidates or send both of them. Their peculiar capacities grow out of their special interests in topics, which is not a measurable talent but an integral part of their own character.
In fact, the word ‘intelligence’ does not name a single measurable property, like ‘temperature’ or ‘weight’. It is a general term like ‘usefulness’ or ‘rarity’. And general terms always need a context to give them any detailed application. It makes no more sense to ask whether Newton was more intelligent than Shakespeare than it does to ask if a hammer is more useful than a knife. There can’t be such a thing as an all-purpose intelligence, any more than an all-purpose tool. … Thus the idea of a single scale of cleverness, rising from the normal to beyond the highest known IQ, is simply a misleading myth.
It is unfortunate that we have got so used today to talk of IQs, which suggests that this sort of abstract cleverness does exist. This has happened because we have got used to ‘intelligence tests’ themselves, devices which sort people out into convenient categories for simple purposes, such as admission to schools and hospitals, in a way that seems to quantify their ability. This leads people to think that there is indeed a single quantifiable stuff called intelligence. But, for as long as these tests have been used, it has been clear that this language is too crude even for those simple cases. No sensible person would normally think of relying on it beyond those contexts. Far less can it be extended as a kind of brain-thermometer to use for measuring more complex kinds of ability. The idea of simply increasing intelligence in the abstract — rather than beginning to understand some particular kind of thing better — simply does not make sense. (Midgley, 2018: 100-101)
IQ researchers, though, take IQ to be a measure of a quantitative trait that can be measured in increments—like height, weight, and temperature. “So, in deciding that IQ is a quantitative trait, investigators are making big assumptions about its genetic and environmental background” (Richardson, 2000: 61). But there is no validity to the measure and hence no backing for the claim that it is a quantitative trait and measures what they suppose it does.
Just because we refer to something abstract does not mean that it has a referent in the real world; just because we call something ‘intelligence’ and say that it is tested—however crudely—by IQ tests does not mean that it exists and that the test is measuring it. Thermometers measure temperature; scales measure weight; IQ tests… don’t measure ‘intelligence’ (whatever that is); they measure acculturated knowledge and skills. Howe (1997: 6) writes that psychological test scores are “an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons” while Richardson (1998: 127) writes that “The most reasonable answer to the question “What is being measured?”, then, is ‘degree of cultural affiliation’: to the culture of test constructors, school teachers and school curricula.”
But the word ‘intelligence’ refers to what? The attempt to measure ‘intelligence’ is a failure as such tests cannot be divorced from their cultural contexts. This won’t stop IQ-ists, though, from claiming that we can rank one’s mind as ‘better’ than another on the basis of IQ test scores—even if they can’t define ‘intelligence’. Midgley’s chapter, while short, gets straight to the point. ‘Intelligence’ is not a ‘thing’ like height, weight, or temperature. Height can be measured by a ruler; weight can be measured by a scale; temperature can be measured by a thermometer. Intelligence? Can’t be measured by an IQ test.
The Vietnam War can be said to be the only war that America has lost. Due to a lack of men volunteering for combat (and a large number of young men getting exemptions from service from their doctors, among many other routes), standards were lowered in order to meet quotas. The military recruited those with low test scores, who came to be known as ‘McNamara’s Morons’—a group of some 357,000 men. With ‘mental standards’ lowered, the US had men to fight in the war.
This decision was made by Secretary of Defense Robert McNamara and Lyndon B. Johnson. It came to be known as ‘McNamara’s Folly’—the title of a book on the subject (Hamilton, 2015). Hamilton (2015: 10) writes: “A total of 5,478 low-IQ men died while in the service, most of them in combat. Their fatality rate was three times as high as that of other GIs. An estimated 20,270 were wounded, and some were permanently disabled (including an estimated 500 amputees).”
Hamilton spends the first part of the book describing his friendship with a man named Johnny Gupton, who could neither read nor write and who spoke in hillbilly phrasing. According to Hamilton (2015: 14):
I was surprised that he knew nothing about the situation he was in. He didn’t understand what basic training was all about, and he didn’t know that America was in a war. I tried to explain what was happening, but at the end, I could tell that he was still in a fog.
Hamilton describes an instance in which the trainees were told not to write anything “raunchy” on the postcards they were to send home; the sergeant said, “Don’t be like that trainee who went through here and wrote ‘Dear Darlene. This is to inform you that Sugar Dick has arrived safely…’” (Hamilton, 2015: 16). Hamilton goes on to write that Gupton did not ‘get’ the joke while “there was a roar of laughter” from everyone else. Since Gupton could not read or write, Hamilton wrote his postcard for him, but Gupton did not know his own address and could not state the full name of a family member, only saying “Granny.” He could not tie his boots correctly, so Hamilton did it for him every morning. But he was a great boot-shiner, having the shiniest boots in the barracks.
Writing home to his fiancee, Hamilton (2015: 18) wrote to her that Gupton’s dogtags “provide him with endless fascination.”
Gupton had trouble distinguishing between left and right, which prevented him from marching in step (“left, right, left, right”) and knowing which way to turn for commands like “left face!” and “right flank march!” So Sergeant Boone tied an old shoelace around Gupton’s right wrist to help him remember which side of his body was the right side, and he placed a rubber band on the left wrist to denote the left side of the body. The shoelace and the rubber band helped, but Gupton was a bit slow in responding. For example, he learned how to execute “left face” and “right face,” but he was a fraction of a second behind everyone else.
Gupton was also not able to make his bunk to Army standards, so Hamilton and another soldier did it for him. Hamilton stated that Gupton could also not distinguish between sergeants and officers. “Someone in the barracks discovered that Gupton thought a nickel was more valuable than a dime because it was bigger in size” (Hamilton, 2015: 26). So after that, Hamilton took Gupton’s money and rationed it out to him.
Hamilton then describes a time when a captain asked him what they were doing and the situation they were in—to which he gave the correct responses. The captain then asked Gupton, “Which rank is higher, a captain or a general?” to which Gupton responded, “I don’t know, Drill Sergeant.” (He was supposed to say ‘Sir.’) The captain, talking to Hamilton, then said:
Can you believe this idiot we drafted? I tell you who else is an idiot. Fuckin’ Robert McNamara. How can he expect us to win a war if we draft these morons? (Hamilton, 2015: 27)
Captain Bosch’s contemptuous remark about Defense Secretary McNamara was typical of the comments I often heard from career Army men, who detested McNamara’s lowering of enlistment standards in order to bring low-IQ men into the ranks. (Hamilton, 2015: 28)
Hamilton heard one sergeant tell others that “Gupton should absolutely never be allowed to handle loaded weapons on his own” (Hamilton, 2015: 41). Gupton was then sent to kitchen duty where, for 16 hours a day (5 am to 9 pm), he would have to peel potatoes, clean the floors, do the dishes, etc.
Hamilton (2015: 45) then describes another member of “The Muck Squad,” in a different platoon, who “was unfazed by the dictatorial authority of his superiors.” When an officer screamed at him for not speaking or acting correctly, he would give a slightly related answer. When asked if he had shaved one morning, he “replied with a rambling of pronouncements about body odor and his belief that the sergeants were stealing his soap and shaving cream” (Hamilton, 2015: 45). He was thought to be faking insanity, but he kept getting weirder; Hamilton was told that he would talk to an imaginary person in his bunk at night.
This man, Murdoch, was then told to find an electric floor buffer to buff the floors, and he “wandered around in battalion headquarters until he found the biggest office, which belonged to the battalion commander. He walked in without knocking or saluting or seeking permission to speak, and asked the commander—a lieutenant colonel—for a buffer”. When in the office, he “proceeded to play with a miniature cannon and other memorabilia on the commander’s desk…” (Hamilton, 2015: 45). Murdoch was then found to have schizophrenia and was given a medical discharge and sent home.
Right before the tests of physical fitness to see if they qualified, young-looking sergeants shaved their heads to pass as trainees and took the tests in their place—“Gupton” got a 95 while “Hamilton” got an 80, which upset Hamilton because he knew he could have scored 100 himself.
Hamilton ended up nearly getting heatstroke (with a 105-degree fever), and so he was separated from Gupton. Years later, he contacted someone who had spent time with Gupton, who did not “remember much about Gupton except that he was protected by a friendly sergeant, who had grown up with a ‘mentally handicapped’ sister and was sensitive to his plight” (Hamilton, 2015: 51). That sergeant gave Gupton only menial jobs. Hamilton discovered that Gupton had died in 2002, at age 57.
Hamilton was then sent to Special Training Company: while he was out with his fever he had missed important training days, so his captain sent him to the Company for “rehabilitation” before returning to another training company. The men had to do log drills and a Physical Combat Proficiency Test, which most of them failed; 60 points per event were needed to pass. The first event was crawling on dirt as fast as possible for 40 yards on hands and knees. “Most of the men failed to get any points at all because they were disqualified for getting up on their knees. They had trouble grasping the concept of keeping their trunks against the ground and moving forward like supple lizards” (Hamilton, 2015: 59).
The second event was the horizontal ladder—imagine a jungle gym, with the men swinging like apes through the trees. Hamilton, though not strong, as he admits, traversed 36 rungs in under a minute for the full 60 points. When he attempted to show the others how to do it and watch them try, “none of the men were able to translate the idea into action” * (Hamilton, 2015: 60).
The third event was called run, dodge, and jump. They had to zig-zag, dodge obstacles, side-step people, and finally jump over a shallow ditch. To get the 60 points, they had to make two trips in 25 seconds.
Some of the Special Training men were befuddled by one aspect of the course: the wooden obstacles had directional arrows, and if you failed to go in the right direction, you were disqualified. A person of normal intelligence would observe the arrows ahead of time and run in the right direction without pausing or breaking stride. But these men would hesitate in order to study the arrows and think about which way to go. For each second they paused, they lost 10 points. A few more men were unable to jump across the ditch, so they were disqualified. (Hamilton, 2015: 60-61)
Fourth was the grenade throw. They had to throw 5 training grenades 90 feet, with scoring similar to that of a dartboard: the closer to the bull’s eye, the higher the score. They had to throw from one knee in order to simulate battle conditions, but “Most of the Special Training men were too weak or uncoordinated to come close to the target, so they got a zero” * (Hamilton, 2015: 61). Most of them tried throwing it on a straight line, like a baseball catcher, rather than in an arc, like a center fielder throwing to the catcher to get someone out at home plate. “…the men couldn’t understand what he was driving at, or else they couldn’t translate it into action. Their throws were pathetic little trajectories” (Hamilton, 2015: 62).
Fifth was the mile run—they had to do it in eight minutes and 33 seconds, wearing combat boots. The other men in his group would immediately sprint, tiring themselves out; they could not—according to Hamilton—“grasp or apply what the sergeants told them about the need to maintain a steady pace (not too slow, not too fast) throughout the entire mile.”
Hamilton then discusses another instance in which sergeants told a soldier that there was a cat behind the garbage can and to go pick it up. The ‘cat’ turned out to be a skunk, and the soldier spent the next two weeks in the hospital being treated for possible rabies. “He had no idea that the sergeants had played a trick on him.”
It was true that most of us were unimpressive physical specimens—overweight or scrawny or just plain unhealthy-looking, with unappealing faces and awkward ways of walking and running.
Sometimes trainees from other companies, riding by in trucks, would hoot at us and shout “morons!” and “dummies!” Once, when a platoon marched by, the sergeant led the men in singing,
If I had a low IQ,
I’d be Special Training, too!
(It was sung to the tune of the famous Jody songs, as in “Ain’t no use in goin’ home/Jody’s got your girl and gone.”)
Hamilton states that the one “exception to the general unattractiveness” was Freddie Hensley, who was consumed with “dread and anxiety”, always sighing. Freddie ended up being too slow to pass the rifle test with moving targets. Hamilton had wondered “why Freddie had been chosen to take the rifle test, but it soon dawned on me that he was selected because he was a handsome young man. Many people equate good looks with competence, and ugliness with incompetence. Freddie didn’t look like a dim bulb” (Hamilton, 2015: 72).
Freddie also didn’t know some ‘basic facts’, such as that lightning causes thunder. “As Freddie and I sat together on foot lockers and looked out the window, I passed the time by trying to figure out how close the lightning was. … I tried to explain what I was doing, and I was not surprised that Freddie could not comprehend. What was surprising was my discovery that Freddie did not know that lightning caused thunder. He knew what lightning was, he knew what thunder was, but he did not know that one caused the other” (Hamilton, 2015: 72).
The test used while the US was in Vietnam was the AFQT (Armed Forces Qualifying Test) (Maier, 1993: 1). As Maier (1993: 3) notes—as does Hamilton—men who chose to enlist could choose their occupation from a list whereas those who were forced had their occupation chosen for them.
For example, during the Vietnam period, the minimum selection standards were so low that many recruits were not qualified for any specialty, or the specialties for which they were qualified had already been filled by people with higher aptitude scores. These people, called no-equals, were rejected by the algorithm and had to be assigned by hand. Typically they were assigned as infantrymen, cooks, or stevedores. Maier (1993: 4)
Most of McNamara’s Morons
came from economically unstable homes with non-traditional family structures. 70% came from low-income backgrounds, and 60% came from single-parent families. Over 80% were high school dropouts, 40% read below a sixth-grade level, and 15% read below a fourth-grade level. 50% had IQs of less than 85. (Hsiao, 1989: 16-17)
Such tests were constructed from their very beginnings, though, to get this result.
… the tests’ very lack of effect on the placement of [army] personnel provides the clue to their use. The tests were used to justify, not alter, the army’s traditional personnel policy, which called for the selection of officers from among relatively affluent whites, the assignment of whites of lower socioeconomic status to lower-status roles, and African-Americans at the bottom rung. (Mensh and Mensh 1991: 31)
Reading through this book, the individuals that Hamilton describes clearly had learning disabilities. We do not need IQ tests to identify individuals who clearly suffer from learning disabilities and other abnormalities (Siegel, 1989). Jordan Peterson claims that the military won’t accept people with IQs below 83, while Gottfredson states that
IQ 85 is a second important minimum threshold because the U.S. military sets its minimum enlistment standards at about this level. (2004, 28)
The laws in some countries, such as the United States, do not allow individuals with IQs below 80 to serve in the military because they lack adequate trainability. (2004, 18)
What “laws” do we have here in America, specifically, that disallow “individuals with IQs below 80 to serve in the military”? ** Where are the references? Why do Peterson and Gottfredson both make unevidenced claims when the claim in question most definitely needs a reference?
McNamara’s Folly is a good book; it shows why we should not send people with learning/physical/mental disabilities to war. However, from the descriptions Hamilton gave, we did not need to learn their IQs to know that they could not be soldiers. It was clear as day that they weren’t all there, and their IQ scores are irrelevant to that. The people described in the book clearly have developmental disabilities; how is IQ causal in this regard? IQ is an outcome, not a cause (Howe, 1997).
Both Jordan Peterson and Linda Gottfredson claim that the military will not take a recruit with an IQ score of 80 or below, but they simply assert this, and attempting to validate the claim by searching through military papers does not substantiate it. In any case, IQ scores are not needed to learn that an individual has a learning disability (as those described in the book clearly had). The unevidenced claims from Gottfredson and Peterson should not be accepted. Ultimately, one’s IQ is not causal in regard to one’s inability to, say, become a soldier; other factors are important, not a reified number we call ‘IQ.’ Their IQ scores were not their downfall.
* Note that if one does not have a good mind-muscle connection, then one won’t be able to carry out novel tasks such as those the men faced on the monkey bars.
1/20/2020 Edit ** I did not look hard enough for a reference for the claims. It appears that there is indeed a law (10 USC Sec. 520) stating that those who score between 1 and 9 questions right (category V) are not trainable recruits. The ASVAB, though, is not a measure of ‘general intelligence’ but a measure of “acculturated learning” (Roberts et al, 2000). The ‘IQ test’ used in Herrnstein and Murray’s The Bell Curve was the AFQT, and it “best indicates poverty” (Palmer, 2018). This letter relates AFQT scores to the Wechsler and Stanford-Binet—where the cut-off is 71 for the S-B and 80 for the Wechsler (both are category V). Returning to Mensh and Mensh (1991), such tests were—from their very beginnings—used to justify the current military order, placing lower-class recruits in more menial jobs.