…but the question “What is intelligence?” has only ever been answered by a shifting social consensus. So perhaps, like the stuff of dreams and nightmares, it too belongs in the realm of mere appearances. (Goodey, 2011)
IQ groupings/cutoffs are arbitrary. By “arbitrary” I mean without reason or justification, not supported by facts or argument. What facts or reasons justify the groupings? The arbitrariness of IQ is also seen historically: score distributions were changed as different assumptions were made about the “nature” of “intelligence” (e.g., Terman, 1916; Hilliard, 2012). In this article, I will argue that IQ cutoffs are arbitrary, with no rational justification behind them; test constructors use them because they yield the desired distributions.
The arbitrariness of such cutoffs and groupings has been known since the first American tests were constructed, after Binet and Simon’s test was brought over from France by Goddard in 1910. (See here for a history of the testing movement and how the tests are constructed.) Terman (1916: 89) warned “That the boundary lines between such groups [feebleminded, dull, superior, genius etc.] are arbitrary.” It is in this same book—The Measurement of Intelligence—that Terman adjusted the scores of men and women, adding and subtracting the items that men and women most often got right or wrong in order to even out their scores. Terman put items on the test that men were good at (“arithmetical reasoning, giving differences between a president and a king, solving the form board, making change, reversing hands of a clock, finding similarities, and solving “the induction test.”” [Terman, 1916: 81]) while he also put items on the test that women were good at (“drawing designs from memory, aesthetic comparison, comparing object from memory, answering the “comprehension questions”, repeating digits and sentences, tying a bow-knot, and finding rhymes” [Terman, 1916: 81]). The same can be seen in SAT differences between men and women, as Rosser (1989) points out. It is a matter of item selection/analysis and of what distribution of scores is desired.
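The balancing procedure described above can be sketched as a toy simulation. Everything here is hypothetical and purely illustrative (the pass rates, item counts, and group labels are invented, not taken from Terman): if item pass rates differ between groups, the group gap in total scores is a direct function of the item mix, so a constructor can dial the gap up or down by choosing which items to keep.

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: each "item" favors one group (higher pass
# rate for that group). A test built only from male-favoring items yields
# a male advantage; a balanced item mix makes the group means converge
# by construction, not because the groups changed.
N = 10_000  # simulated test-takers per group

def score(n_male_items, n_female_items, group):
    """Total score: each item is passed with probability 0.7 if the item
    favors the taker's group, else 0.5 (all rates are made up)."""
    s = 0
    for _ in range(n_male_items):
        s += random.random() < (0.7 if group == "m" else 0.5)
    for _ in range(n_female_items):
        s += random.random() < (0.7 if group == "f" else 0.5)
    return s

def mean_gap(n_m_items, n_f_items):
    """Difference between male and female mean scores for a given item mix."""
    m = statistics.mean(score(n_m_items, n_f_items, "m") for _ in range(N))
    f = statistics.mean(score(n_m_items, n_f_items, "f") for _ in range(N))
    return m - f

unbalanced = mean_gap(20, 0)   # all 20 items favor males: sizable gap
balanced = mean_gap(10, 10)    # 10 of each: gap near zero

print(round(unbalanced, 2))
print(round(balanced, 2))
```

The point of the sketch is only that the gap is a free parameter of item selection: nothing about the simulated test-takers changes between the two runs, yet the first mix produces a large group difference and the second erases it.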
Such arbitrary IQ cutoffs for the “groups” on which Terman passed value judgments reflect the IQ-ists’ need to conceptualize “intelligence” as normally distributed, with most people falling in the middle and fewer on the tails, “geniuses” on the right and the “mildly impaired and delayed” on the left, per the 5th edition of the Stanford-Binet. But the normal distribution for “IQ” is a myth (Richardson, 2017: chapter 2). Since IQ tests are constructed to be normally distributed, any and all “group distinctions” and “cutoffs” drawn on that distribution are arbitrary. The test was created first, AND THEN its constructors attempted to deduce what it “measures” on the basis of correlations with other tests and with academic achievement. Further, even showing that there is a relationship between IQ scores and academic achievement is irrelevant, because the two are different versions of the same test; the item content is similar between them (Schwartz, 1975; Beaujean et al, 2018). The distribution is a creation of the test’s constructors, not something that was simply discovered when the tests were created.
Thus, the “bell curve” is an artifact, not a fact, of test construction (Simon, 1997). Items are added and removed on a sample population until the desired distribution is reached. It is this artificial distribution on which all IQ theorizing rests, and on which IQ-ists draw their cutoffs between different “grades” of “intelligence.” On the constructed bell curve, about 2.2% of people fall below 70; the test was constructed to get this result. So if the bell curve is an artificial production created by humans, then so is the classification system (“intelligence”). And if the classification system is an artificial creation, then so too is the concept of “learning disability.” Bazemore, Shinaprayoon, and Martin write that:
By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p.166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So basically, test constructors had in mind—before they developed the test—who was or was not “intelligent” and then built the test to fit their expectations. One might ask, “Why does this matter if it happened 100 years ago?” It matters because there is no conceptual support for hereditarian thinking about psychological traits, and if there is no support, then the only reason such thinking persists is prejudice (Mensh and Mensh, 1991). Furthermore, newer IQ tests use items similar to older ones, and newer tests are “validated” against older tests (like the Stanford-Binet), so biases in those tests carry over, without conscious bias toward groups being an ultimate goal (Richardson, 2002: 287).
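Returning to the constructed bell curve: the roughly 2.2% share below IQ 70 is simply a property of the curve itself. Once scores are forced onto a normal distribution with mean 100 and SD 15, the fraction below any cutoff follows mechanically from the normal CDF, as a quick check with Python's standard library shows:

```python
from statistics import NormalDist

# IQ scales are constructed with mean 100 and SD 15; the ~2.2% figure
# below 70 follows from the shape of the normal curve alone, not from
# any empirical fact about people.
iq = NormalDist(mu=100, sigma=15)
share_below_70 = iq.cdf(70)  # 70 is two SDs below the mean
print(round(share_below_70 * 100, 2))  # prints 2.28
```

In other words, the cutoff does not discover that 2.2% of people are “disabled”; the distribution was built so that 2.2% of scores land there.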
The arbitrariness of IQ can also be seen in the cutoff for learning disability: a score of 70 or below is taken to mean the individual needs remedial help, and so the IQ test is treated as a good instrument for these purposes. But IQ tests are arbitrary when used to reflect deficits in everyday functioning (Arvidsson and Granlund, 2016). Cutoffs for learning disabilities have fluctuated between IQ 70 and 85 over the years. Someone in the US is defined as “learning disabled” if there is a discrepancy between their academic achievement and their “intelligence” (i.e., IQ test score). But is there any justification for such a cutoff, whereby falling under a certain magic number makes one “learning disabled”?
The answer is no, because IQ is irrelevant to the definition of learning disabilities (Siegel, 1988, 1989, 1993). It is absolutely unnecessary to give IQ tests to identify the learning disabled, and the existence of a discrepancy is not a necessary condition (Gunderson and Siegel, 2001). People under IQ 70 frequently do not need specialist services whereas people with IQs over 70 frequently do (Whitaker, 2004). Such tests only show WHAT a person has learned; they DO NOT estimate one’s intellectual “capability.” Since IQ tests are tests of a certain type of knowledge, it follows that exposure to the items on the test and to its structure—along with other non-cognitive variables (Richardson, 2002)—explains test score differences, and that these differences can be built into and out of the test on the basis of a priori assumptions. It further follows that a low score means one was not exposed to the item content and structure of the test, not that one has a “deficit of intelligence” as IQ-ists claim.
Webb and Whitaker (2012) describe the double think employed by many clinical psychologists, privately acknowledging the limitations of IQ tests and the arbitrary nature of the cut-off score of 70 IQ points that defines learning disability, whilst publicly and professionally talking about learning disabilities “as if it were a real, naturally occurring condition” (p. 440). Thus the diagnostic procedure involving IQ tests can be seen as a way of passing off culturally specific norms of competence (measured through arcane rituals of assessment) as if they were universal and incontrovertible. (Chinn, 2021: 137-138)
The arbitrariness of IQ 70 as the cutoff for mental disability also rears its head in the courtroom, when defendants are on trial for murder. In Atkins v. Virginia, SCOTUS ruled that it was unconstitutional to execute intellectually disabled people. Then in Hall v. Florida, it was ruled that an IQ score by itself was not sufficient justification in sentencing; other medical/diagnostic criteria had to be used. Some may object that IQ only “matters” here when a defendant escapes execution because he is found to have an IQ below 70. Never mind the ethical debate over the death sentence: the arbitrary cutoff of 70 for mental retardation—which, as has been shown, does not hold—has numerous legal and societal consequences for the individual so unluckily deemed “disabled.”
Kanaya and Ceci (2007) argue that when an individual takes a test (whether at the beginning or the end of the test’s norming cycle) can dictate whether or not they fall under the arbitrary IQ 70 cutoff that spares them execution. The year in which a defendant on trial for murder is tested can thus literally determine whether or not they are put to death. Prosecutors in many US states have also successfully argued for “ethnic adjustments” to IQ scores. Sanger (2015) reviews many US cases in which prosecutors have done so. Arguing that “ethnic adjustments” are “logically, clinically, and constitutionally unsound”, he reviews studies showing that abuse, neglect, poverty, and trauma decrease test scores and can be epigenetically passed on through multiple generations. Sanger (2015: 148-149) concludes:
Furthermore, any correlations between the average IQ test scores of racial cohorts (or average scores of cohorts to the overall community norm) are not attributable to race and are heavily influenced by race-neutral environmental factors. Those race-neutral environmental factors include the effects of the environment of childhood abuse, stress, poverty, and trauma. Such adverse environmental (but race-neutral) factors likely result in phenotypic manifestations, which include epigenetic changes affecting intellectual ability and result in greater numbers of persons with intellectual disabilities within that population. The individuals whose intellectual ability is adversely affected by those harmful environmental factors are disproportionately represented by minority groups and among those facing the death penalty in the United States.

Therefore, the actual recipients of death sentences—the people on death row—are poor, of color, and have disproportionately been subjected to stress, poverty, abuse, and trauma. These very people are likely to suffer from actual phenotypic/biological impairment in intellectual functioning that can be passed down by way of programmed epigenetic gene expression through generations.
Quite clearly, this arbitrary IQ 70 cutoff for “intellectual disability” has real-life implications, and in some cases it is a matter of life or death, turning on “ethnic adjustments” and on when an individual took a specific test in that test’s lifecycle before renorming. Sanger showed that the IQ scores of black and “Hispanic” defendants are routinely adjusted upwards, pushing them above the “cutoff” so that they can face the death penalty.
In my view, such distinctions between “IQ groups”, like those created by Terman and continuing into the present day, are an attempt at naturalizing “intellectual disability”, an attempt at saying that these are “natural kinds.” Though “intelligent people and intellectually disabled people are not natural kinds but historically contingent forms of human self-representation and social reciprocity, of relatively recent historical origin” (Goodey, 2011: 13). So intellectual disability, learning disability, intelligence: these are all social constructs (which do not denote natural kinds), and they change with the times.
But Herrnstein and Murray (1994: 1) argued that “the word intelligence describes something real and…it varies from person to person is as universal and ancient as any understanding about the state of being human. Literate cultures everywhere and throughout history have words for saying that some people are smarter than others.” But unfortunately for Herrnstein and Murray, “Intelligence as currently and conventionally understood by psychologists is a brashly modern notion” (Daston, 1992: 211).
The arbitrariness of the designation of “intelligence” means that “IQ/intelligence” is not a “thing”, nor is it a “natural kind”; it is a socially constructed historical notion (Goodey, 2011), as is the concept of “giftedness” (Borland, 1997). The creation of these tests, and indeed the label “intellectually disabled”, is completely racialized (Chinn, 2021). The socially constructed nature of “intelligence” can be seen just by analyzing the test items: they are heavily classed and racialized, specifically white and middle-class. When it comes to the death penalty and IQ, the issues are very serious: when an individual was given a test may be the deciding factor between life and death, and minorities are more likely to be on death row and also more likely to experience abuse, trauma, and the like, which can be passed on generationally and then influence test scores. And in test construction there is no justification for any certain set of items; whatever yields the desired distribution is what is “right.” That is why “IQ” is arbitrary.
We need to dispense with the idea that there is a biological “thing” called “intelligence.” What we call “intelligence” is socially constructed: what psychologists call “intelligent” is answering items correctly and getting a higher score on tests that are heavily biased toward certain races and classes in America. Once we understand that this concept is socially constructed and not biological, maybe we won’t repeat past mistakes, like sterilizing tens of thousands of people in the name of eugenics.
Assertions derived from genetic reductionist ideas also ignore the abundant and burgeoning evidence that genes are outcomes of evolutionary processes and not bases of them. (Lerner, 2021: 449)
Genetic reductionism places (social) problems “in the genes”, and if these problems are “in the genes” then we can either (1) use gene therapy, (2) reduce the frequency of “the bad genes” in the population (eugenics), or (3) just live with these genetically caused problems. Social groups differ materially and they also differ genetically. To the gene-determinist, social positioning is determined by a genetically determined intelligence. (See here for arguments against the claim.)
On the basis of heritability estimates derived from flawed methodologies like twin and adoption studies (Richardson and Norgate, 2005; Joseph, 2014; Burt and Simons, 2015; Moore and Shenk, 2016), hereditarians claim that traits like “IQ” (“intelligence”) are strongly genetically determined, and that if a trait is strongly genetically determined, then environmental interventions are doomed to fail (Jensen, 1969). Since IQ is said to have a heritability of .8, the reductionist claims that environmental interventions are useless or nearly so. Indeed, this was the conclusion of Jensen’s infamous (1969) paper: compensatory education (an environmental intervention) had failed, so the differences must be genetic in nature.
Arguments like these have been forwarded for the better part of 100 years, and they are false because they rely on false assumptions: (1) that natural selection has caused trait differences between populations and (2) that genes are active, not passive, causes. (1) and (2) can be combined into (3): genes that cause differences between groups were naturally selected and eventually fixed in the populations. This article will review some hereditarian thinking on natural selection and human variation, show how that theorizing fails, show how the theory of natural selection itself cannot possibly be true (Fodor and Piattelli-Palmarini, 2010), and finally show that by accepting genetic reductionism we cannot achieve social justice, since the causes of social problems are reduced to genes.
The ultimate claim from hereditarians is that human behavior, social life, and development can be reduced to—and explained by—genes. Social inequities, meaning differences between groups that are avoidable and unjust, are the target of social justice. The hereditarian attempts to reduce social ills to genes, thereby getting around what social justice activists want, leading to possibilities (1)-(3) above. This could be disastrous: if problems that could in fact be fixed are instead deemed “genetic” and left alone, countless lives will not be made better.
Hereditarianism and natural selection
The crucial selection pressure responsible for the evolution of race differences in intelligence is identified as the temperate and cold environments of the northern hemisphere, imposing greater cognitive demands for survival and acting as selection pressures for greater intelligence. (Lynn, 2006: 135)
Hereditarians are neo-Darwinians, and since they are neo-Darwinians they hold that natural selection is the most powerful “mechanism” of evolution, causing trait changes by culling organisms with “bad” traits, which then decreases the frequency of the genes that supposedly cause the trait. But (1) natural selection cannot possibly be a mechanism, as there is no agent of selection (that is, no mind selecting organisms with fitness-enhancing traits for a certain environment), nor are there laws of selection for trait fixation that hold across all ecologies (Fodor and Piattelli-Palmarini, 2010); and (2) genes aren’t causes of traits on their own—they are caused to give their information by and for the physiological system (Noble, 2011).
In his article “Epistemological Objections to Materialism” in The Waning of Materialism, Koons (2010: 338) gives an argument against natural selection with the same force as Fodor and Piattelli-Palmarini’s (2010):
The materialist must suppose that natural selection and operant conditioning work on a purely physical basis (without presupposing any prior designer or any prior intentionality of any kind). According to anti-Humean materialism, only microphysical properties can be causally efficacious. Nature cannot select a property unless that property is causally efficacious (in particular, it must causally contribute to survival and reproduction). However, few, if any, of the biological features that we all suppose to have functions (wings for flying, hearts for pumping blood) constitute microphysical properties in a strict sense. All biological features (at least, all features above the molecular level) are physically realized in multiple ways (they consist of extensive disjunctions of exact physical properties). Such biological features, in the world of the anti-Humean materialist, don’t have effects—only their physical realizations do. Hence, the biological features can’t be selected. Since the exact physical realizations are rarely, if ever repeated in nature, they too cannot be selected. If the materialist responds by insisting that macrophysical properties can, in some loose and pragmatically useful way of speaking, be said to have real effects, the materialist has thereby returned to the Humean account, with the attendant difficulties described in the last sub-section. Hence, the materialist is caught in the dilemma.
We can grant that “nature” cannot select a trait if it isn’t causally efficacious. But combining Fodor’s argument with Koons’, if traits are linked then the fitness-enhancing trait cannot be directly selected-for since when you have one, you have the other. In any case, “natural selection” is part of the bedrock of hereditarian theorizing. It was natural selection—according to the hereditarian—that caused racial differences in behavior and “intelligence.” And so, if the hereditarian has no response to these two arguments against natural selection, then they cannot logically claim that the differences they describe are due to “natural selection.”
So the hereditarian theorist asserts that those with genes conferring a fitness advantage had more children than those without, which led to the selection of those genes and their fixation in certain populations. This is a familiar story, and the hereditarian uses it as a basis for the claim that racial differences in traits are the outcome of natural selection. These views are found in Rushton (2000: 228-231), Jensen (1998: 170, 434-436), and Lynn (2006: Chapters 15, 16, and 17). But as Noble (2012) noted, there is no privileged level of causation; before performing the relevant experiments, we cannot state that genes are causes of traits. This, too, refutes the hereditarian claim.
Consider Rushton’s “Differential K” theory, in which Mongoloids, Caucasians, and Africans are said to differ on a suite of traits as a function of their life histories and whether they are r- or K-strategists. Rushton (2000: 27) also claimed that “different environments cause, via natural selection, biological differences“, by which he means that the environment acts as a filter. But the claim that the environment is a filter causing variation in traits by “selecting against” genes fails, too. When traits are correlated, environmental filters (the mechanism by which selection theory purportedly works) cannot distinguish between causes of fitness and mere correlates of causes of fitness. So appealing to environments causing biological differences fails.
But unfortunately for hereditarians, a new analysis by Kevin Bird refutes the claim that natural selection is responsible for racial differences in “IQ” (Bird, 2021). So now, even assuming that genes can be selected-for their contribution to fitness and assuming that psychological traits can be genetically transmitted (which is false), hereditarianism still fails.
Hereditarianism and genetic reductionism
The ideology of IQ-ism is inherently reductionist. Behavioral geneticists, who claim to be able to partition the relative contributions of genes and environment into neat little percentages, are also reductionists about “traits” such as “IQ.” Further, if one is an IQ-ist, there is a good chance one falls into the reductionist camp of attempting to explain “intelligence” as reducible to physiological brain states and parts of the brain (such as Deary, 1996; Deary, Penke and Johnson, 2010; Jung and Haier, 2007; Haier, 2016; Deary, Cox, and Hill, 2021).
Reductionism can be simply stated as the claim that the parts have causal primacy over the whole. When it comes to psychological reduction, it is often assumed that genes are the ultimate level to which traits reduce, thereby explaining how and why psychological traits differ between individuals—most importantly to the IQ-ist, “intelligence.” Behavioral geneticists have been reductionists since the field’s inception, and this has carried over to the present day (Panofsky, 2014). Even now, in the 2020s, reductionist accounts of behavior and psychology are still being pushed, and the attempted reduction is a reduction to genes. This does not mean that environmental reduction has primacy either—although we can and have identified environmental insults that impede the ontogeny of certain traits.
Deary, Cox, and Hill (2021) argue for a “systems biology” approach to the study of “intelligence.” They review GWAS and neuroimaging studies and attempt to lay the groundwork for a “mechanistic account” of intelligence, picking up where Jung and Haier (2007) left off. Unfortunately, the claims they make about GWAS fail (Richardson and Jones, 2020; Richardson, 2017b, 2021), and so do the claims they make about neuroreduction (Uttal, 2012).
This kind of genetic reductionism about psychological traits—along with social ills such as addiction, violence, etc.—then becomes ideological, in thinking that genes can explain how and why we have these kinds of problems. Indeed, this was why the first “IQ” tests were translated and brought to America: to screen and bar immigrants the IQ-ists saw as “feebleminded” (Richardson, 2003, 2011; Allen, 2006; Dolmage, 2018). Such tests were also used to sterilize people in the name of a eugenic ideology said to be for the betterment of society (Wilson, 2017). Thus, when such reductionism is applied to society and becomes an ideology, we can see how pseudoscientific beliefs manifest themselves in negative outcomes for the populace.
Ladner (2020: 10) claims to have “constructed an economic analysis grounded in evolutionary biology,” arguing that “Natural Selection is the main force that determines economic behavior.” Ladner claims that socialism will always fail, since authoritarian regimes stifle our selfish proclivities, while capitalism is grounded in selfishness and greed and so will always prevail over socialism. This is quite the unique argument… Of course Dawkins gets cited, since Ladner is talking about selfishness, and these selfish genes are what supposedly cause the selfish behavior that allows capitalism to flourish. But the claim that genes are selfish is not a physiologically testable hypothesis (Noble, 2011), and DNA can’t be regarded as a replicator independent of the cell (Noble, 2018). In any case, the argument of the book is that inequality is due to natural selection and there isn’t much we can do about capitalism, since genes make us selfish and capitalism is all about selfishness. But being too selfish leads to the huge wealth inequalities we see in America today. The argument is novel, but it fails because it is a just-so story and its claims about “natural selection” are false.
Hereditarianism and mind-brain identity
Hereditarianism carries an implicit commitment to physicalism about the brain. Ever since the power of our neuroimaging methods increased at the beginning of the new millennium, many studies have come out correlating different psychological traits with different brain states. Processes of the mind, to the mind-brain identity theorist, are identical to states and processes of the brain (Smart, 2000). And in the past two decades, studies correlating physiological brain states with psychological traits have increased in number.
The leading theorists here are Haier and Jung with their P-FIT model. P-FIT stands for the Parieto-Frontal Integration Theory, first proposed by Jung and Haier (2007), who analyzed 37 neuroimaging studies. This, they claim, will “articulate a biology of intelligence.” (Also see Colom et al, 2009.) Again, correlations are expected, but we can’t then claim that the brain states cause the trait (in this case, “IQ”). (See Klein, 2009 for a primer on the philosophical issues in neuroimaging.)
But in 2012 psychologist William Uttal published his book Reliability in Cognitive Neuroscience: A Meta-Meta-Analysis, where he argues that pooling these kinds of studies for a meta-analysis (exactly what Jung and Haier (2007) did) “could lead to grossly distorted interpretations that could deviate greatly from the actual biological function of an individual brain.” Pooling multiple studies from different individuals, taken at different times of day under different conditions, would lead to wide variation in physiologies—never mind the fact that motion artifacts can influence neuroimages, and that emotion and cognition are intertwined (Richardson, 2017a: 193).
The point is, we cannot pool together these types of studies in an attempt to localize cognitive processes to states of the brain, which is exactly what P-FIT does (or attempts to do). In any case, the correlations found by Jung and Haier (2007) can be explained by experience. IQ tests are experience-dependent (one must be exposed to the knowledge on the test and be familiar with test-taking), and so too are the parts of the brain that change with what a person experiences. We cannot say that the physiological states cause the IQ score: since the items on the test draw on knowledge more common in middle-class life, middle-class test-takers arrive better prepared.
Socially disastrous claims
Views from the likes of Robert Plomin—that there’s “not much we can do” about “environmental effects” (Plomin, 2018: 174)—are socially disastrous. If such ideas become mainstream, then we may desist from programs that actually help people, on the basis that “it doesn’t work.” But this claim, that environmental effects are “unsystematic and unstable”, is derived from conclusions based largely on twin studies: the lion’s share of variance is attributed to genes, and whatever variance is left over is attributed to the environment. (Do note, though, that Plomin’s claim that DNA is a blueprint is false.)
Hereditarians like Plomin then claim that environmental effects derive from one’s genotype so in actuality environmental effects are genetic effects—this is called “genetic nurture.” By using this new concept, the reductionist can skirt around environmental effects and claim that the effect itself is genetic even though it’s environmental in nature. Genes, in this concept, are active causes, actively causing parental behavior. So genes cause parental behavior which then influences how parents treat/parent their children. In this way, behavioral geneticists can claim that environmental effects are genetic effects too. (This is like Joseph’s (2014) Argument A in its circularity.)
By applying and accepting genetic reductionist claims, we rob people of certain life chances and fail to commit ourselves to social justice. To the hereditarian, since the environment doesn’t matter, genes do, and so we need to look at society from the gene’s-eye view. But this view obscures how and why our current social structures are the way they are. “IQ” tests were originally created to show that the current social hierarchy is the “right” one, and the hereditarian believes himself to have shown that the hierarchy is “genetic”, each group having its place on the social hierarchy on the basis of IQ scores which reduce to genes (Mensh and Mensh, 1991).
But humans are social creatures, and although hereditarians attempt to reduce human social life to genes (in a circular manner), they fail. Their failing has led to the destruction of thousands of lives (see the sterilizations in America during the 1900s and around the world, e.g., in Cohen, 2016 and Wilson, 2017). Reductionist attempts to explain social behavior by genes have been made over the past 20 years (e.g., Jensen, 1998; Rushton, 2000; Lynn, 2006; Hart, 2007), but they all fail (Lerner, 2018, 2021). On the hereditarian view, social (environmental) changes cannot undo what the genes have “set” in individuals, and so we need not pour money into social programs.
For instance, many hereditarians and criminologists have espoused eugenic views, like Jensen’s claims that welfare could lead to the genetic enslavement of a part of the population (Jensen, 1969: 95) and that we can “estimate a person’s genetic standing on intelligence” based on their IQ score (Jensen, 1970: 13), to name two. It is no surprise to me that people who hold such reductionist views of genes and society would also hold eugenic views like these. It is, in fact, a logical endpoint of hereditarianism: “phasing out” populations, as Lynn described in his review of Cattell’s Beyondism (see Tucker, 2009).
The answer to hereditarianism
Since we must reject hereditarianism, the answer to hereditarian dogma is relational developmental systems (RDS) theory, which emphasizes the actions of all developmental resources rather than reducing development to one primary resource as hereditarians do. Similar points have been made by other developmental systems theorists, most notably Oyama (1985/2000). What is selected are not genes or behaviors but whole developmental systems. Genes aren’t active causes. If we look at development as a dance with music, as Noble (2006, 2016) does, there are no sufficient causes for development, only necessary causes, of which genes are but one part of the whole system.
The answer to hereditarianism is to show that it fails conceptually, that its “causal” framework for explaining the differences (“natural selection”) is unsound, and that multiple interacting factors are responsible for human development in the womb and throughout the life course. “Theories derived from RDS meta-theory focus on the “rules,” the processes that govern, or regulate, exchanges between (the functioning of) individuals and their contexts” (Lerner, 2021: 457). Hereditarianism relies on gene-selectionism. But genes are not leaders in evolution; development is inherently holistic, not reductionist.
The hereditarian program has its beginnings with Francis Galton, and after the first “IQ” test was made (Binet’s), American eugenicists used it to “show” who was a “moron” (that is, who had a low “IQ,” taken to mean low “intelligence”). Tens of thousands of sterilizations were soon carried out, since the causes of these problems were assumed to be in these people’s genes and so, on this logic, negative eugenics needed to be practiced in order to cull from the population the genes that lead to socially undesirable traits.
The hereditarian hypothesis is, therefore, a racist hypothesis, contra Carl (2019), who argued the hereditarian hypothesis is not racist while citing many arguments from critics. I won’t get into that here, as I have many articles on the matters Carl (2019) discusses. But what I will say is that the hereditarian hypothesis is racist in virtue of (1) not being logically plausible (reductionism about the mind and physicalism are both false) and (2) ranking races on a scale of “higher to lower” (that is, a hierarchy). Racism “is a system of ranking human beings for the purpose of gaining and justifying an unequal distribution of political and economic power” (Lovechik, 2018). Therefore, the hereditarian hypothesis is a racist hypothesis, contra Carl’s protestations. Hereditarians may claim that their claims are stifled in the public debate, but for behavioral genetics at large, this is false (see Kampourakis, 2017). Carl (2018) claims that “stifling” the debate around race, genes, and IQ can do harm, but he is sadly mistaken! When differences that can in fact be changed are believed to be “genetic”, they are deemed unfixable, and the groups with a higher frequency of whichever genes are (supposedly) causally efficacious for IQ will then be treated differently.
If neuroreduction (mind-brain reduction) is false, if genetic reduction is false, if natural selection isn’t a mechanism, and if heritability estimates cannot partition genetic and environmental variation, then hereditarianism cannot possibly be true. The arguments given here complement my conceptual arguments against hereditarianism, adding more force against the hereditarian hypothesis. Just like with my argument to ban IQ tests, we must ban hereditarian research too, since the outcomes can be socially disastrous (Lerner, 2021, part VI, Developmental Theory and the Promotion of Social Justice). By now, these kinds of “theories” and claims have been refuted to hell and back, and so the only reason to hold these kinds of beliefs is racist attitudes (combined with some mental gymnastics).
So for these, and many more, reasons, we must outright reject genetic reductionism (not least because these claims derive from flawed studies with false assumptions like twin studies) along with its partner “natural selection.” We therefore must commit ourselves to social justice to ameliorate the effects of racist attitudes and views.
The hereditarian-environmentalist debate has been ongoing for over 100 years. In this time frame, many theories have been put forward to explain the disparities between individuals and groups. In one camp you have the hereditarians, who claim that any non-zero heritability for IQ scores means that hereditarianism is true (e.g., Warne, 2020); in the other camp you have the environmentalists, who claim that differences in IQ are explained by environmental factors. This debate, raging since the 1870s when Francis Galton coined the “nature-nurture” dichotomy, still rages today. Unfortunately, the environmentalists lend credence to IQ-ist claims that, however imperfect, IQ tests are “measures” of intelligence.
Three recent books on the matter are A Terrible Thing to Waste: Environmental Racism and its Assault on the American Mind (Washington, 2019), Making Kids Cleverer: A Manifesto for Closing the Advantage Gap (Didau, 2019), and Young Minds Wasted: Reducing Poverty by Enhancing Intelligence in Known Ways (Schick, 2019). All three of these authors are clearly environmentalists and they accept the IQ-ist canard that IQ—however crudely—is a “measure” of “intelligence.”
There are, however, no sound arguments that IQ tests “measure” intelligence, and there is no response to the Berka/Nash measurement objection, since no hereditarian can articulate the specified measured object, the object of measurement, and the measurement unit for IQ; there is, also, no accepted definition or theory of “intelligence”. So how can we say that some “thing” is being “measured” with a certain instrument if we have not satisfactorily defined what we claim to be measuring, with a well-accepted theory of what we are measuring (Richardson and Norgate, 2015; Richardson, 2017) and a specified measured object, object of measurement, and measurement unit (Berka, 1983a, 1983b; Nash, 1990; Garrison, 2003, 2009) for the construct we want to measure?
But the point of this article is that environmentalists push the hereditarian canard that IQ is equal to, however crudely, intelligence. And though the authors do have great intentions and are pointing to things that we can do to attempt to ameliorate differences between individuals in different environments, they still lend credence to the hereditarian program.
A Terrible Thing to Waste
Washington (2019) discusses the detrimental effects (actual and possible) of lead, mercury, and other metals that are more likely to be found in low-income black and “Hispanic” communities, along with iodine deficiencies. These environmental exposures retard normal brain development. But one is not justified in claiming that IQ tests are measures of “intelligence”—at best, as Washington (2019) argues, we can claim that they index the effects of environmental polluters on the brains of developing children.
Intelligence is a product of environment and experience that is forged, not inherited; it is malleable, not fixed. (Washington, 2019: 20)
While it is true, as Washington claims, that we can mitigate the problems from toxic metals and the lack of other nutrients pertinent to brain development by addressing the problems in these communities, it does not follow that IQ is a “biological” thing. Yes, IQ is malleable (contra hereditarian claims), and Headstart does work to improve life outcomes, even though such gains “fade out” after the child leaves the enriched environment. Lead poisoning, for example, has led to a loss of 23 million IQ points per year (Washington, 2019: 15). But I am not worried about lost IQ points (even though by saving the IQ points from being lost, we would be directly improving the environments that lead to such a decrease). I am worried about the detrimental effects of these toxic chemicals on the developing minds of children; lost IQ points are an outcome of this effect. At best, IQ tests can track cognitive damage due to pollutants in these communities (Washington, 2019), but they do NOT “measure” intelligence. (Also note that lead consumption is associated with higher rates of crime, so this is yet another reason to reduce the consumption of lead in these communities.)
Speaking of “measuring intelligence”, Washington (2019: 29) noted that Jensen (1969: 5) stated that while “intelligence” is hard to define, it can be measured… But how does that make any sense? How can you measure what you can’t define? (See arguments (i), (ii), and (iii) here.)
Big Lead, though, “actively encouraged landlords to rent to families with vulnerable young children by offering financial incentives” (Washington, 2019: 55). This was in reference to the researchers who studied the deleterious effects of lead consumption on developing humans: “The participation of a medical researcher, who is ethically and legally responsible for protecting human subjects, changes the scenario from a tragedy to an abusive situation. Moreover, this exposure was undertaken to enrich landlords and benefit researchers at the detriment of children” (Washington, 2019: 55). The deleterious effects of lead on development were recognized as early as the 1800s (Rabin, 2008), but Big Lead pushed back:
[Lead Industries Association’s] vigorous “educational” campaign sought to rehabilitate lead’s image, muddying the waters by extolling the supposed virtues of lead over other building materials. It published flooding guides and dispatched expert lecturers to tutor architects, water authorities, plumbers, and federal officials in the science of how to repair and “safely” install lead pipes. All the while the [Lead Industries Association] staff published books and papers and gave lectures to architects and water authorities that downplayed lead’s dangers. (Washington, 2019: 60)
In any case, Washington’s book is a good read into the effects of toxic metals on brain development, and while we must do what we can to ameliorate the effects of these metals in low-income communities, IQ increases are a side effect of ameliorating the toxic metals in these communities.
Making Kids Cleverer
Didau (2019: 86) outright claims that “intelligence is measured by IQ tests”—outright pushing the hereditarian view that IQ tests “measure intelligence.” (A strange claim, since on pg 95-96 he says that IQ tests are “a measure of relative intelligence.”)
In the book, Didau accepts many hereditarian premises—like the claims that IQ tests measure intelligence and that heritability can partition genetic and environmental variation. Further, Didau says in the Acknowledgements (pg 11) that Ritchie’s (2015) Intelligence: All That Matters “forms the backbone for much of the information in Chapters 3 and 5.” So we can see how the hereditarian IQ-ist stance colors his view of the relationship between “IQ” and “intelligence.” He also makes the bald claims that “intelligence is a good candidate for being the best researched and best understood characteristic of the human brain” and that it’s “also probably the most stable construct in all psychology” (pg 81).
Didau takes the view that intelligence is both a way to acquire knowledge as well as what type of knowledge we know (pg 83)—basically, it’s what we know and what we do with what we know, along with ways to acquire said knowledge. What one knows is obviously a product of the environment one grows up in, and what we do with the knowledge we have is similarly down to environmental factors. Didau states that “Possibly the strongest correlations [with IQ] are those with educational outcomes” (pg 92). But Didau, it seems, fails to realize that this strong correlation is built into the test, since IQ tests and scholastic achievement tests are different versions of the same test (Schwartz, 1975; Richardson, 2017).
In one of the “myths of intelligence” he discusses (Myth 3: Intelligence cannot be increased, pg 102), Didau uses an analogy similar to mine. In an article on “the fade-out effect“, I argued that claiming an intervention is useless because its gains fade is like saying that going to the gym is useless because, once one stops going and leaves the enriched environment, one loses one’s gains. The direct parallel between Headstart and my gym/muscle-building analogy, then, is clear.
In another myth (Myth 4: IQ tests are unfair), Didau claims that if you get a low IQ score then you are probably unintelligent, while if you get a high one, it means you know the answers to the questions—which is obviously true. Of course, to know the answers to the questions (and to be able to reason the answers for some of the questions), one must be exposed to the knowledge that is contained in that test, or they won’t score high.
We can reject the use of IQ scores by racists, he says, who would use them to justify the superiority of their own groups and the inferiority of “the other”, all while not rejecting that IQ tests are valid (where have they been validated?). “Something real and meaningful” is being measured by these tests, and we have chosen to call this “intelligence” (pg 107). But we can say this about anything. Imagine having a test Y for X. We don’t really know what X is, nor that Y really measures it. But because the results accord with our a priori biases, and since we have constructed Y to get the results we think we should see, we assume that we are measuring what we set out to measure—all without meeting the basic requirements of measurement.
While Didau does seem to agree with some of the criticisms I’ve levied against IQ tests over the years (cross-cultural testing is pointless, IQ scores can be changed), he is, obviously, pushing a hereditarian IQ-ist agenda, cloaked as an environmentalist. He contradicts himself: he first flatly states that intelligence is measured by IQ tests and only later qualifies the claim—and I don’t think one should assume that he meant they are an “imperfect measure” of intelligence. (Imagine an imperfect measure of length—would we still be using it to build houses if it was only somewhat accurate?) Didau also agrees with the g theorists that there is a “general cognitive ability.” He further agrees with Ritchie and Tucker-Drob (2018) and Ceci (1996) that schooling can and does increase IQ scores (as summer vacations show that IQ scores decrease without schooling) (see Didau, 2018: Chapter 5). So while he does agree that IQ isn’t static and that education can and does increase it, he is still pushing a hereditarian IQ-ist model of “intelligence”—even though, as he admits, the concept of “intelligence” has yet to be satisfactorily defined.
Young Minds Wasted
In the last book, Young Minds Wasted (Schick, 2019), while Schick does dispense with many hereditarian myths (such as the myth of the normal distribution, see here), he still—through an environmentalist lens—justifies the claim that IQ tests test intelligence. While he masterfully dispenses with the “IQ is normally distributed” claim (see the discussion on pg 180-186), the tagline of the book is “reducing poverty by increasing intelligence, in known ways.”
The poor’s intelligence is wasted, he says, by an intelligence-depressing environment. We can see the parallels here with Washington’s (2019) A Terrible Thing to Waste. Schick claims that “the single most important and widespread cause of poverty is the environmental constraints on intelligence” (pg 12, Schick’s emphasis). Now, like Washington, Schick says that a whole slew of chemicals and toxins decrease IQ (a truism) and, by identity, intelligence. Of course, living in a deprived environment where one is exposed to different kinds of toxins and chemicals can retard brain development and lead to deleterious life outcomes down the line. But this fact does not mean that intelligence is being measured by these tests; it only shows that there are environments that can impede brain development, which is then mirrored in a decrease in IQ scores.
Schick says that as intelligence increases, societal problems decrease. But, as I have argued at length, this is an artifact of the way the tests themselves are constructed, embodying the a priori biases of the test’s constructors. Items emerge arbitrarily from the heads of the test’s constructors, who then try them out on a standardization sample (Jensen, 1980: 71), looking for the results they want and assume a priori. If we can construct a test with any kind of distribution we want, then the “truisms” about the relationship between IQ and life events can be turned on their head; there is no logical reason to accept one set of items over another, other than that one set upholds the test constructor’s previously held biases.
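The item-selection point can be made concrete with a toy simulation. Everything below is hypothetical (invented pass probabilities, an invented selection threshold)—it is not real test data—but it sketches how, from one and the same pool of candidate items, a constructor can erase a group gap, create one, or reverse it, purely by choosing which items to keep (as Terman did when balancing male- and female-favoring items in 1916):

```python
import random

random.seed(0)

# Hypothetical item pool (illustrative only): 40 candidate items, each
# with a pass probability for group A and group B. The "bias" term
# stands in for items that one group answers correctly more often.
items = []
for _ in range(40):
    base = random.uniform(0.3, 0.7)            # overall item difficulty
    bias = random.uniform(-0.2, 0.2)           # group-specific difference
    items.append((base + bias, base - bias))   # (p_pass_A, p_pass_B)

def score_gap(selected):
    """Expected score gap (group A minus group B) over the selected items."""
    return sum(pa - pb for pa, pb in selected)

# Three different "tests" built from the same item pool:
balanced = [it for it in items if abs(it[0] - it[1]) < 0.05]  # gap erased
favor_a  = [it for it in items if it[0] > it[1]]              # gap created
favor_b  = [it for it in items if it[1] > it[0]]              # gap reversed

print(f"balanced test gap: {score_gap(balanced):+.2f}")
print(f"A-favoring gap:    {score_gap(favor_a):+.2f}")
print(f"B-favoring gap:    {score_gap(favor_b):+.2f}")
```

Nothing in the selection step appeals to what the items “measure”; the distribution of scores, and any gap between groups, is fixed by which items survive the cut.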
Schick does agree that “intelligent behavior” can change throughout life, based on one’s life experiences. But “Human intelligence is based on several genetically determined capabilities such as cognitive functions” (pg 39). He also claims that genetic factors determine—while environmental factors influence—cognitive functions, memory, and universal grammar.
Along with his acceptance that genetic factors can influence IQ scores and other aspects of the mind, he also champions heritability estimates as being able to partition genetic and environmental variation in traits (even though they can do no such thing; Moore and Shenk, 2016). He uncritically accepts the 80/20 genetic/environmental heritability split from Bouchard and the 60/40 split from Jensen and from Murray and Herrnstein. These “estimates”—drawn mostly from family, twin, and adoption studies (Joseph, 2015)—are invalid due to the false assumptions the researchers hold, never mind the conceptual difficulties with the concept of heritability itself (Moore and Shenk, 2016).
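For context, the “partitioning” these heritability estimates claim to perform rests on the standard additive model of quantitative genetics, sketched below. The clean split only goes through if gene-environment interaction and covariance are assumed away—part of what Moore and Shenk (2016) dispute:

```latex
% Additive model assumed by heritability estimates (illustrative only):
% phenotypic variance = genetic variance + environmental variance
V_P = V_G + V_E, \qquad h^2 = \frac{V_G}{V_P}
% Bouchard's "80/20" asserts h^2 = 0.8, i.e., V_G = 0.8\,V_P;
% the Jensen and Murray/Herrnstein "60/40" asserts h^2 = 0.6.
% With interaction and covariance admitted, the neat partition breaks down:
V_P = V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G \times E}
```

Once the covariance and interaction terms are non-zero, there is no unique way to assign shares of $V_P$ to “genes” versus “environment”, which is the conceptual difficulty noted above.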
While Washington and Schick both make important points—that those who live in poor environments are at risk of being exposed to things that disrupt their development—they both, along with Didau, accept the hereditarian claim that IQ tests are tests of intelligence. While each author has their own specific caveats (some of which I agree with, and others I do not), they keep the hereditarian claim alive by lending credence to its central premise, even while rejecting the genetic lens.
While the authors have good intentions and the research they discuss is extremely important and interesting (like the effects of toxins and metals on the development of the brain and the child), they—like their intellectual environmentalist ancestors—unwittingly lend credence to the hereditarian claim that IQ tests measure intelligence, even as they explain the causes of individual and group differences in completely different ways. These authors, with their assertions, accept the claim that certain groups are less “intelligent” than others—only it’s not genes that are the cause, but differences in environment. And while that claim is true—the deleterious effects Washington and Schick discuss can and do retard normal development—it in no way, shape, or form means that “intelligence” is being measured.
Normal (brain) development is indeed a terrible thing to waste; we can teach kids more by exposing them to more things, and young minds are wasted by poverty. But accepting these premises does not require accepting the hereditarian dogma that IQ tests are measures of some undefined thing with no theory. Though poverty and the environments that those in poverty live in impede normal brain development, which is then reflected in IQ scores, it does not follow that these tests are “measuring” intelligence—at best, they show environmental challenges that change the brain of the individual taking the test.
One needs to be careful with the language they use, lest they lend credence to hereditarian pseudoscience.
Ranking human worth on the basis of how well one compares in academic contests, with the effect that high ranks are associated with privilege, status, and power, does suggest that psychometry is best explored as a form of vertical classification and attending rankings of social value. (Garrison, 2009: 36)
Binet and Simon’s (1916) book The Development of Intelligence in Children is somewhat of a Bible for IQ-ists. The book chronicles the methods Binet and Simon used to construct their tests for children, to identify those children who needed more help at school. In the book, they describe the anatomic measures they used. Indeed, before becoming a self-taught psychologist, Binet measured skulls and concluded that skull measurements did not correlate with teachers’ assessments of their students’ “intelligence” (Gould, 1995, chapter 5).
In any case, despite Binet’s protestations that Gould discusses, he wanted to use his tests to create what Binet and Simon (1916: 262) called an “ideal city.”
It now remains to explain the use of our measuring scale which we consider a standard of the child’s intelligence. Of what use is a measure of intelligence? Without doubt one could conceive many possible applications of the process, in dreaming of a future where the social sphere would be better organized than ours; where every one would work according to his own aptitudes in such a way that no particle of force should be lost for society. That would be the ideal city. It is indeed far from us. But we have to remain among the sterner and matter-of-fact realities of life, since we here deal with practical experiments which are the most commonplace realities.
Binet disregarded his skull measurements as a correlate of ‘intelligence’ since they did not agree with teachers’ ratings. But then Binet and Simon (1916: 309) discuss how teachers assessed students (and give an example). This is how Binet made sure that the new psychological ‘measure’ he devised related to how teachers assessed their students. Binet and Simon’s “theory” grouped certain children as “superior” and others as “inferior” in ‘intelligence’ (whatever that is), but did not pinpoint biology as the cause of the differences between the children. These groupings, though, corresponded to the social class of the children.
Thus, in effect, what Binet and Simon wanted to do was organize society along social class lines, using their ‘intelligence tests’ to place individuals where they “belonged” on the hierarchy on the basis of their “intelligence”—whether or not this “intelligence” was “innate” or “learned.” Indeed, Binet and Simon did originally develop their scales to distinguish children who needed more help in school than others. They assumed that individuals had certain (intellectual) properties which related to their class position, and that, by using their scales, they could identify certain children and place them into certain classes for remedial help. But a closer reading of Binet and Simon shows two hereditarians who wanted to use their tests for much the same reasons the tests were originally brought to America!
Binet and Simon’s test was created to “separate natural intelligence and instruction” since they attempted to ‘measure’ the “natural intelligence” (Mensh and Mensh, 1991). Mensh and Mensh (1991: 23) continue:
Although Binet’s original aim was to construct an instrument for classifying unsuccessful school performers inferior in intelligence, it was impossible for him to create one that would do only that, i.e., function at only one extreme. Because his test was a projection of the relationship between concepts of inferiority and superiority—each of which requires the other—it was intrinsically a device for universal ranking according to alleged mental worth.
This “ideal city” that Binet and Simon imagine would have individuals work according to their “own aptitudes”—meaning that individuals would work where their social class dictated they would work. This was, in fact, eerily similar to the uses of the test that Goddard translated and of the test—the Stanford-Binet—that Terman developed in 1916.
Binet and Simon (1916: 92) also discuss further uses for their tests, irrespective of job placement for individuals:
When the work, which is here only begun, shall have taken its definite character, it will doubtless permit the solution of many pending questions, since we are aiming at nothing less than the measure of intelligence; one will thus know how to compare the different intellectual levels not only according to age, but according to sex, social condition, and to race; applications of our method will be found useful to normal anthropology, and also to criminal anthropology, which touches closely upon the study of the subnormal, and will receive the principal conclusion of our study.
Binet, therefore, had views similar to Goddard’s and Terman’s regarding “tests of intelligence”, and Binet wanted to stratify society by ‘intelligence’ using his own tests (which were culturally biased against certain classes). Binet’s writings on the uses of his tests, ironically, mirrored what the creators of the Army Alpha and Beta tests believed. Binet believed that his tests could select individuals for the roles they would be designated to work in. Binet, nevertheless, contradicted himself numerous times (Spring, 1972; Mensh and Mensh, 1991).
This dream of an “ideal city” was taken a step further when Binet’s test was brought to America and translated by Goddard and then used for selecting military recruits (call it an “ideal country”). The test constructors would build the test in order to “ensure” the right percentages of “the right” people would be in the spots designated to them on the basis of their intelligence.
What Binet was attempting to do was mark individual social value with his test. He claimed that we can use his (practical) test to select people for certain social roles. Thus, Binet’s dream for what his tests would do—further developed by Goddard, Yerkes, Terman, et al.—is inherent in what the IQ-ists of today want to do. They believe that there are “IQ cutoffs”, meaning that people with an IQ above or below a certain threshold won’t be able to do job X. However, the causal efficacy of IQ is exactly what is in question, along with the fact that IQ-ists build their own biases into tests they believe are ‘objective.’ But where Binet differed from the IQ-ists of today and his contemporaries is that he believed ‘intelligence’ is relative to one’s social situation (Binet and Simon, 1916: 266-267).
It is ironic that Gould believed we could use Binet’s test (along with contemporary tests constructed and ‘validated’—correlated—with Terman’s Stanford-Binet test) for ‘good’; this is what Binet thought his test would do. But then, when the hereditarians had Binet’s test, they took Binet’s arguments to their logical conclusion. This also has to do with the fact that the test was constructed AND THEN they attempted to ‘see’ what was ‘measured’ with correlational studies. The ‘meaning’ of test scores, thus, is seen after the fact with—wait for it—correlations with other tests that were ‘validated’ with other (unvalidated) tests.
This comes back to the claim that the mental can be ‘measured’ at all. If physicalism is false—and there are dozens of (a priori) arguments that establish this fact—then the mental is irreducible to the physical, and psychological traits—and with them the mind—cannot be measured. Further, rankings are not measures (Nash, 1990: 63); therefore, ability and achievement tests cannot be ‘measures’ of any property of individuals or groups. The object of measurement is the human being, and this was inherent in Binet’s original conception of his test—a conception the IQ-ists in America acted on with their restrictions on immigration in the early 1900s.
This speaks to the fatalism that is inherent in IQ-ism—and has been inherent since the creation of the first standardized tests (of which IQ tests are one kind). These tests are—and have been since their inception—attempts to measure human worth and the differences in value between persons. The IQ-ist claims that “IQ tests must measure something,” and this ‘measurement’, it is claimed, is shown by the fact that the tests have ‘predictive validity.’ But such claims that a ‘property’ inherent in individuals and groups is being measured fail. The real ‘function’ of standardized testing is assessment, not measurement.
The “ideal city”, it seems, is just a city of IQ-ism—where one’s social roles are delegated by where one scores on a test that is constructed to get the results the constructors want. Therefore, what Binet wanted his tests to do was (and some may even argue still is) being done: marking social worth (Garrison, 2004, 2009). Psychometry is therefore a political enterprise. It is inherently political and not “value-free.” Psychologists/psychometricians do not have an ‘objective science’, as the object of study (the human) can reflexively change their behavior when they know they are being studied. Their field is inherently political, and they mark individuals and groups—whether they admit it or not. “Ideal cities” can lead to eugenic thinking, in any case, and striving for “ideality” can lead to social harms—even if the intentions are ‘good.’
I started this blog almost 5 years ago. Currently (excluding this one), there are 480 articles on this blog. Searching my blog name “notpoliticallycorrect.me” on Google Scholar turns up two citations—one on “IQ” and obesity and the other on inclusionism about race when it comes to medicine. These two citations pretty much perfectly show my views and how they have changed over the 5 years since the creation of this blog. I will discuss both papers that cited me in turn.
In the journal Social and Human Sciences. Domestic and Foreign Literature (a sociology journal), the author cited a 2016 article I published (back in my “HBD” days) titled “Race, Obesity, Poverty, and IQ”, writing:
income and education (which in the latter case presumably correlates with IQ levels). They have the highest prevalence of type 2 diabetes. In terms of ethnicity, overweight indicators are as follows: 67.3% for whites, 75.6% for African Americans and 77.9% for Latinos. Summing up all this, we obtain, in the words of the authors of the study, “politically incorrect conclusions”: African Americans and Hispanics are more at risk of living in poverty, have lower IQ, higher rates of obesity and a chance of developing diabetes; The main factor in these correlations is the IQ level (Race, obesity, poverty and IQ, 2016).
Almost four years later (after my views have undergone a significant change), I would draw different conclusions. Blacks are 51% more likely to be obese than whites (Lincoln, Abdou, and Lloyd, 2016), with the cause being a multitude of factors. Though it seems that black American men with more African ancestry may be protected against central adiposity (Klimentidis et al, 2016). Racial disparities in obesity are due to an interaction of a multitude of factors (Byrd, Toth, and Stanford, 2018). Interestingly, black kids with obesity don’t perceive themselves as obese (Lankarani and Assani, 2018), which, presumably, is due to higher rates of obesity in the black population. Black girls are more likely to have an earlier menarche than white girls (e.g., Freedman et al, 2000), and this is because black girls are more likely to be obese than white girls, due to the effects of leptin—which is permissive for menarche—from the higher levels of body fat in black girls (Salsberry, Reagen, and Pajer, 2010).
We must look to social determinants of health to understand why certain non-white populations are more likely to be obese than others. Looking at “IQ” as causal for obesity—which I used to believe—obscures much more than it helps. We can look to epigenetic effects regarding biological explanations of obesity (Krueger and Reithner, 2016), for instance high BMI in black women being related to saliva-based DNA methylation, which is used as a marker for aging (Li et al, 2019). Even perceived racism (it does not have to be actual) can have physiologic effects on black women, heightening cortisol levels and leading to a heightened obesity risk (Mwendwa et al, 2016).
In any case, it’s cool that I got cited but uncool that it was for something I no longer believe.
The second citation comes from Rossi (2020: 13), in a Social Science Information article titled “New avenues in epigenetic research about race: Online activism around reparations for slavery in the United States”, citing my article Race, Medicine, and Epigenetics: How the Social Becomes Biological:
Consequently, social scientists’ opinions about epigenetic research dealing with race and slavery have sometimes been scrutinized by blog authors. For example, the article untitled [sic] ‘Race, medicine, and epigenetics: How the social becomes biological’ published in 2019 on the blog Notpoliticallycorrect features a long discussion on whether race could be seen as a viable variable to discuss the epigenetics of trauma, especially relating to slavery in the US.14 After summarizing the views of legal scholar and sociologist Dorothy Roberts, who has argued repeatedly in her works against the use of the concept of race in biomedical sciences, the author sides with philosophers Michael Hardimon and Shannon Sullivan, who are both enthusiastic about the inclusion of race to discuss genetics and epigenetics:
Race and medicine is a tendentious topic. On one hand, you have people like sociologist Dorothy Roberts (2012) who argues against the use of race in a medical context, whereas philosopher of race Michael Hardimon thinks that we should not be exclusionists about race when it comes to medicine. If there are biological races, and there are salient genetic differences between them, then why should we disregard this when it comes to a medically relevant context? [. . .] So, we should not be exclusionists (like Roberts), we should be inclusionists (like Hardimon). [. . .] Furthermore, acknowledging the fact that the social dimensions of race can help us understand how racism manifests itself in biology (for a good intro to this see Sullivan’s (2015) book The Physiology of Racist and Sexist Oppression, for even if the ‘oppression’ is imagined, it can still have very real biological effects that could be passed onto the next generation – and it could particularly affect a developing fetus, too). It seems that there is a good argument that the effects of slavery could have been passed down through the generations manifesting itself in smaller bodies.
Relying also on Jasienska’s research, the author of this blog post therefore dismissed the idea that race should not be applied to the medical field, while using the words and legitimacy of humanities scholars such as Hardimon and Sullivan to back up their claims. These contributions show the way journalists and various blog authors write about epigenetics by mixing together scientific articles in various fields (the social sciences, philosophy, psychiatry, social work) in an effort to bring more legitimacy to the topic. This process highlights the ways in which lay circles produce new connections between various papers and texts dealing with epigenetics, no matter how different their fields of expertise may be.
This shows a very sharp contrast between my current views and my older views on race and obesity. My earlier belief that obesity was “determined” by IQ (e.g., Kanazawa, 2012; Kanazawa, 2014) was in error: people with low “IQs” are more likely to be in poverty and have less access to good foods, along with the abundance of fast food restaurants in areas with a higher concentration of blacks (James et al, 2014). Black women, for instance, have a lower RMR than white women (Gannon, DiPietro, and Poehlman, 2000).
These two articles of mine that were cited (on similar issues, no less) show the evolution of my views over the four or so years between the publication of the two articles on this blog. This is a good case study in how one can view the aetiology of one thing completely differently based on the views one previously held. The views of obesity and race I hold now are much more complex than those of the reductive “it’s genes/IQ” kind of guy I used to be. A more holistic view of obesity disparities, factoring in access to food (food swamps/deserts), income, location, etc., is more informative than looking just to “IQ” or “genes for” obesity, because even if “genes for” obesity exist and even if they are distributed unevenly across races, the predominant determinant of weight will be activity level/caloric consumption, which is based on SES and other factors—not “IQ” or “obesity genes.” The social does become biological, and it does have consequences for obesity disparities between and within races.
The other day on Twitter, Davide Piffer made the claim that North and South Italians are “two different races” and that the North is “governed by morons from the South.” What would make him say that North and South Italians “are two different races”? Well, a new study was just published which looked into the genetic divergence of North and South Italians. It seems that Piffer is saying that the fact that North and South Italians are genetically distinct means that they are races. But this is an error in reasoning—it is fallacious to believe that, just because two groups are genetically distinct, they are therefore races.
Sazzini et al (2020) show evidence that North and South Italians genetically diverged after the last glacial maximum (LGM). They state that there was “adaptive evolution” at “insulin-related loci” in Italian regions with temperate climates, and that climatic factors differentiated those from the North and those from the South. As they write:
… we proposed climate-related selective pressures as potential factors having influenced adaptive evolution at insulin-related genes especially in the ancestors of Northern Italians. By regulating glucose homeostasis, adiposity, and thermogenesis in response to high-calorie diets adopted to cope with energetically demanding environmental conditions, these adaptive events might have also contributed to make people from Northern Italy less prone to develop T2D and obesity despite the challenging nutritional context imposed by modern lifestyles. Conversely, possible adaptations against pathogens and modulation of melanogenesis in response to high UV radiation are supposed to have played a role in reduced susceptibility of people from Southern Italy respectively to immunoglobulin-A nephropathy and skin cancers. Finally, multiple adaptive processes evolved by the overall Italian population, but having resulted more pronounced in people from the southern regions of the peninsula, were found to have the potential to secondarily modulate the longevity phenotype. Therefore, by pinpointing genetic determinants underlying biological adaptation of Italian population clusters in response to locally diverging environmental contexts, the present study succeeded in disclosing also valuable biomedical implications of such evolutionary events.
What they did was select 39 unrelated genomes, representative of the known genetic differences in the Italian population, and then compare them with 35 populations from all over Europe. They found that the divergence between the two groups occurred between 12 and 19 kya. They presume that North Italians became “adapted” to lower temperatures and higher-calorie food, while South Italians became adapted to warmer climes and so carry “genes to protect against” skin cancer and pathogens; gene variants ‘related’ to longevity also showed changes.
The press release, though, cautions against adaptive conclusions:
The authors caution that although correlations may be drawn between evolutionary adaptations and current disease prevalence among populations, they are unable to prove causation, or rule out the possibility that more recent gene flow from populations exposed to diverse environmental conditions outside of Italy may have also contributed to the different genetic signatures seen between northern and southern Italians today.
While this is an interesting study (and it does need to rein back its ‘adaptive conclusions’), it does not show that North and South Italians are different races. If they are different races, how does it go? Is there a single North Italian race and a single South Italian race? Or are North Italians Caucasian, while South Italians would be African? Are there 5, 6, or 7 races in Piffer’s racial schema?
Like all hereditarians, he just assumes the existence of race—if this and that population are genetically distinct, then they must be races. Wow, how compelling an argument to show that races exist. But if North and South Italians are different races on the basis of genetic differentiation, then so are East and West Germans (Nelis et al, 2009), North and South Germans (Heath et al, 2008), Southeast and Northwest Dutch (Lao et al, 2013), North and South Dutch (Byrne et al, 2020), Northern and Southern Swedes (Humphreys et al, 2011), East and West Finns (Kerminen et al, 2017), etc. Using genetic differentiation as a basis to show which population is or is not a race logically leads one down this path. Why not 7 billion races, since each individual is genetically unique? Oh, wait: he would probably say something about “breeding populations”—and that would be good, because he would then be stating conditions for racehood, not just assuming races’ existence on the basis of genetic differentiation. Though the claim would still fail.
Piffer has let his mask slip before—back in March he called immigrants to Italy “gorillas”, then saying that “Gorillas are nobler” because they would not take beds from the sick, since this was when Corona was really heating up in Italy. This is similar to what the “World’s Smartest Man” Christopher Langan said about gorillas and immigration. There seems to be a relationship between idiotic sayings about gorillas and immigration and racism… hmm…
In any case, the fact that North and South Italians are genetically distinct populations is in no way, shape, or form evidence that they are different races. For if it is, then there are many, many races—even in countries with the same group of people—if we are to understand race as Piffer seems to (any type of genomic differentiation between populations makes them races). So is each family on earth a different race? This is the kind of conclusion that Piffer’s lazy thinking leads to. Piffer is just like Murray: if populations cluster in genomic analyses, then those population clusters are races. Two hereditarians, two assumptions that fail, since if we take them to their logical conclusion, there are many more races than are traditionally stated. Piffer, it seems, just sees a group he is clearly biased against (South Italians), sees that they are genomically distinct from the North, and then says “Aha! These morons from the South who are governing us are just a different race than we are!” Clinal differences in skin color, too, don’t ‘prove’ that North and South Italians are different races.
Too bad for Piffer, reality is different from his own biased world. Italy is over two thousand years old, and the people in the North and the South belong to the same race. Piffer’s ‘research’ into the “IQs” of North and South Italians (Lynn, 2010; Piffer and Lynn, 2014; see Cornoldi et al, 2010; D’Amico et al, 2011; Robinson, Saggino, and Tommasi, 2011; Danielle and Malanima, 2011; Cornoldi, Giofre, and Martini, 2013), in any case, is (and has been) suspect—but now we know that he has other motivations than just iScience!
(Note: The Italianthro blog has a ton of information on Italy, its peopling, “IQ”, and other things. Check the blog out.)
The East Asian race has been held up as an example of what a high-“IQ” population can do and, along with the correlation between IQ and standardized testing, “HBDers” claim that this is proof that East Asians are more “intelligent” than Europeans and Africans. Lynn (2006: 114) states that the average IQ of China is 103. There are many problems with such a claim, though, not least the many reports of Chinese cheating on standardized tests. East Asians are claimed to be “genetically superior” to other races as regards IQ, but this claim fails.
Chinese IQ and cheating
Differences in IQ scores have been noted all over China (Lynn and Cheng, 2013), but the general consensus is that Chinese IQ as a country is 105, while in Singapore and Hong Kong it is 103 and 107 respectively (Lynn, 2006: 118). To explain the patterns of racial IQ scores, Lynn has proposed the Cold Winters theory (against which a considerable response has been mounted), which proposes that the harshness of the ice-age environment selected for higher ‘general intelligence’ in East Asian and European populations; such a hypothesis seems valid to hereditarians since East Asians (“Mongoloids,” as Lynn and Rushton call them) consistently score higher on IQ tests than Europeans (e.g., Lynn and Dzobion, 1979; Lynn, 1991; Herrnstein and Murray, 1994). In a recent editorial in Psych, Lynn (2019) criticizes this claim from Flynn (2019):
While northern Chinese may have been north of the Himalayas during the last Ice Age, the southern Chinese took a coastal route from Africa to China. They went along the Southern coast of the Middle East, India, and Southeast Asia before they arrived at the Yangzi. They never were subject to extreme cold.
In response, Lynn cites Frost’s (2019) article, where he claims that “mean intelligence seems to have risen during recorded history at temperate latitudes in Europe and East Asia.” Just-so storytelling about how and why such “abilities” were “selected for” aside, the Chinese do score higher on standardized tests than whites and blacks, and this deserves an explanation (the Cold Winters theory fails; it’s a just-so story).
Before continuing, something must be noted about Lynn and his Chinese IQ data. Lynn ignores numerous studies on Chinese IQ—he would presumably say that he wants to test those in good conditions and so disregards the parts of China with bad environmental conditions (as he did with African IQs). Here is a collection of forty studies that Lynn did not refer to—some showing that, even in regions of China with optimum living conditions, IQs below 90 are found (Qian et al, 2005). How could Lynn miss so many of these studies if he has been reading into the matter and, presumably, keeping up with the latest findings in the field? The only answer is that Richard Lynn is dishonest. (I can see PumpkinPerson claiming that “Lynn is old! It’s hard to search through and read every study!” to defend this.)
Although the Chinese are currently trying to stop cheating on standardized testing (even a possible seven-year prison sentence, if caught cheating, does not deter cheating), cheating on standardized tests in China and by the Chinese in America is rampant. The following is but a sample of what could be found doing a cursory search on the matter.
One of the most popular ways of cheating on standardized tests is to have another person take the exam for you, which is rampant in China. In one story, as reported by The Atlantic, students can hire “gunmen” to sit in on tests for them, though measures such as voice recognition and fingerprinting are being taken to fight back. It is well-known that much of the cheating on such tests is done by international students.
Even on the PISA—which is used as an “IQ” proxy since the two correlate highly (.89) (Lynn and Mikk, 2009)—there is cheating. For the PISA, each country is to select, at random, 5,000 of its 15-year-old children from around the country and administer the test; China instead chose its biggest provinces, which are packed with universities. Further, score fluctuations attract attention, which indicates dishonesty. In 2000, more than 2,000 people gathered outside a university to protest a new law which banned cheating on tests.
The rift amounted to this: Metal detectors had been installed in schools to route out students carrying hearing or transmitting devices. More invigilators were hired to monitor the college entrance exam and patrol campus for people transmitting answers to students. Female students were patted down. In response, angry parents and students championed their right to cheat. Not cheating, they said, would put them at a disadvantage in a country where student cheating has become standard practice. “We want fairness. There is no fairness if you do not let us cheat,” they chanted. (Chinese students and their parents fight for the right to cheat)
Surely, with rampant cheating on standardized tests in China (and for Chinese Americans), we can trust the Chinese IQ numbers in light of the news that there is a culture of cheating on tests in China and in America.
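The sampling point made above (PISA calls for a random nationwide sample; testing only the biggest, university-packed provinces inflates the national estimate) can be illustrated with a toy simulation. All numbers below are invented for illustration only—none of this is real PISA or IQ data:

```python
import random

random.seed(0)

# Invented province-level mean scores for a hypothetical country.
province_means = [85, 90, 95, 100, 105, 110]

def sample_scores(means, n_per_province, spread=15):
    """Draw n_per_province Gaussian scores from each listed province."""
    return [random.gauss(m, spread) for m in means for _ in range(n_per_province)]

def mean(xs):
    return sum(xs) / len(xs)

# A sample drawn from all provinces vs. one drawn only from the
# two highest-scoring provinces (the analogue of cherry-picking regions).
nationwide = sample_scores(province_means, 1000)
top_only = sample_scores(province_means[-2:], 1000)

# The cherry-picked estimate sits well above the true nationwide mean.
print(round(mean(nationwide), 1), round(mean(top_only), 1))
```

The gap between the two estimates is exactly the kind of bias a non-random, province-selected sample introduces, whatever the true underlying scores are.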
“Genetic superiority” and immigrant hyper-selectivity
Strangely, some proponents of the concepts of “genetic superiority” and “progressive evolution” still exist. PumpkinPerson is one of those proponents, writing articles with titles like “Genetically superior: Are East Asians more socially intelligent too?”, “More evidence that East Asians are genetically superior”, and “Oriental populations: Genetically superior”, even referring to a fictional character on a TV show as a “genetic superior.” Such fantastical delusions come from Rushton’s ridiculous claim that evolution may be progressive and that some populations are, therefore, “more evolved” than others:
One theoretical possibility is that evolution is progressive and that some populations are more “advanced” than others. Rushton, 1992
Such notions of “evolutionary progress” and “superiority“—even back in my “HBD” days—never passed the smell test to me. In any case, how can East Asians be said to be “genetically superior”? What do “superior genes” or a “superior genome” look like? This has been outright stated by, for example, Lynn (1977), who proclaims—for the Japanese—that his “findings indicate a genuine superiority of the Japanese in general intelligence.” This claim, though, is refuted by the empirical data: what explains East Asian educational achievement is not “superior genes” but the belief that education is paramount for upward social mobility, and so, to preempt discrimination, East Asians overperform in school (Sue and Okazaki, 1990).
Furthermore, the academic achievement of Asians cannot be reduced to Asian culture—the fact that they are hyper-selected is why social class matters less for Asian Americans (Lee and Zhou, 2017):
These counterfactuals illustrate that there is nothing essential about Chinese or Asian culture that promotes exceptional educational outcomes, but, rather, is the result of a circular process unique to Asian immigrants in the United States. Asian immigrants to the United States are hyper-selected, which results in the transmission and recreation of middle-class specific cultural frames, institutions, and practices, including a strict success frame as well as an ethnic system of supplementary education to support the success frame for the second generation. Moreover, because of the hyper-selectivity of East Asian immigrants and the racialisation of Asians in the United States, stereotypes of Asian-American students are positive, leading to ‘stereotype promise’, which also boosts academic outcomes
Inequalities reproduce at both ends of the educational spectrum. Some students are assumed to be low-achievers and undeserving, tracked into remedial classes, and then ‘prove’ their low achievement. On the other hand, others are assumed to be high-achievers and deserving of meeting their potential (regardless of actual performance); they are tracked into high-level classes, offered help with their coursework, encouraged to set their sights on the most competitive four-year universities, and then rise to the occasion, thus ‘proving’ the initial presumption of their ability. These are the spill-over effects and social psychological consequences of the hyper-selectivity of contemporary Asian immigration to the United States. Combined with the direct effects, these explain why class matters less for Asian-Americans and help to produce exceptional academic outcomes. (Lee and Zhou, 2017)
The success of second-generation Chinese Americans has, too, been held up as more evidence that the Chinese are ‘superior’ in their mental abilities—being deemed ‘model minorities’ in America. However, in Spain, the story is different: first- and second-generation Chinese immigrants score lower than the native Spanish population on standardized tests. The ‘types’ of immigrants that have emigrated have been forwarded as an explanation for the differences in attainments of Asian populations. For example, Yiu (2013: 574) writes:
Yet, on the other side of the Atlantic, a strikingly different story about Chinese immigrants and their offspring – a vastly understudied group – emerges. Findings from this study show that Chinese youth in Spain have substantially lower educational ambitions and attainment than youth from every other nationality. This is corroborated by recently published statistics which show that only 20 percent of Chinese youth are enrolled in post-compulsory secondary education, the prerequisite level of schooling for university education, compared to 40 percent of the entire adolescent population and 30 percent of the immigrant youth population in Catalonia, a major immigrant destination in Spain (Generalitat de Catalunyan, 2010).
… but results from this study show that compositional differences across immigrant groups by class origins and education backgrounds, while substantial, do not fully account for why some groups have higher ambitions than others. Moreover, existing studies have pointed out that even among Chinese American youth from humble, working-class origins, their drive for academic success is still strong, most likely due to their parents’ and even co-ethnic communities’ high expectations for them (e.g., Kao, 1995; Louie, 2004; Kasinitz et al., 2008).
The Chinese in Spain believe that education is a closed opportunity, and so they allocate their energy elsewhere—into entrepreneurship (Yiu, 2013). So, instead of pushing for education, Asian parents in Spain push for entrepreneurship. What this shows is that what the Chinese do is based on context and on how they perceive they will be looked at in the society they emigrate to. US-born Chinese immigrants are shuttled toward higher education, whereas in the Netherlands the second-generation Chinese have lower educational attainment; the differences come down to national context (Noam, 2014). The Chinese in the U.S. are hyper-selected whereas the Chinese in Spain are not, and it shows: the Chinese in the US have high educational attainment whereas they have low educational attainment in Spain and the Netherlands—in fact, the Chinese in Spain show lower educational attainment than other ethnic groups (Central Americans, Dominicans, Moroccans; Lee and Zhou, 2017: 2236), which to Americans would be seen as a surprise.
Second-generation Chinese parents match their intergenerational transmission of their ethnocultural emphasis on education to the needs of their national surroundings, which, naturally, affects their third-generation children differently. In the U.S., adaptation implies that parents accept the part of their ethnoculture that stresses educational achievement. (Noam, 2014: 53)
So what explains the higher educational attainment of Asians? A mixture of culture and immigrant (hyper-) selectivity along with the belief that education is paramount for upward mobility (Sue and Okazaki, 1990; Hsin and Xie, 2014; Lee and Zhou, 2017) and the fact that what a Chinese immigrant chooses to do is based on national context (Noam, 2014; Lee and Zhou, 2017). Poor Asians do indeed perform better on scholastic achievement tests than poor whites and poor ‘Hispanics’ (Hsin and Xie, 2014; Liu and Xie, 2016). Teachers even favor Asian American students, perceiving them to be brighter than other students. But what are assumed to be cultural values are actually class values which is due to the hyper-selectivity of Asian immigrants to America (Hsin, 2016).
The fact that the term “Mongoloid idiot” was coined for those with Down syndrome because they looked Asian is very telling (see Hilliard, 2012 for discussion). But the IQ-ists switched from talking about Caucasian superiority to Asian superiority right as the East began its economic boom (Lieberman, 2001). The fact that there were disparate “estimates” of skulls in these centuries points to the fact that such “scientific observations” are painted with a cultural brush. See, e.g., Table 1 from Lieberman (2001):
This tells us, again, that our “scientific objectivity” is clouded by political and economic prejudices of the time. This allows Rushton to proclaim “If my work was motivated by racism, why would I want Asians to have bigger brains than whites?” Indeed, what a good question. The answer is that the whole point of “HBD race realism” is to denigrate blacks, so as long as whites are above blacks in their little self-made “hierarchy” no such problem exists for them (Hilliard, 2012).
Note how Rushton’s long-debunked r/K selection theory (Anderson, 1991; Graves, 2002) took the current hierarchy and placed dozens of traits on a hierarchy where it was M > C > N (Mongoloids, Caucasoids, and Negroids respectively, to use Rushton’s outdated terminology). It is a political statement to put the ‘Mongoloids’ at the top of the racial hierarchy; the goal of ‘HBD’ is to denigrate blacks. But do note that in the late 19th to early 20th century, East Asians were deemed to have small brains and large penises, and that Japanese men, for instance, would “debauch their [white] female classmates” (quoted in Hilliard, 2012: 91).
The “IQ” of China (along with scores on other standardized tests such as TIMSS and PISA) should be suspect in light of the scandals regarding standardized testing. Richard Lynn has failed to report dozens of studies that show low IQ scores for China, thus inflating its scores. This is, yet again, another nail in the coffin for the ‘Cold Winters theory’, since the story is formulated on the basis of cherry-picked IQ scores of children. I have noted that if we had different assumptions, we would have different evolutionary stories. Thus, if the other data were provided and, say, Chinese IQ were found to be lower, we would just create a story to justify the score. This is illustrated wonderfully by Flynn (2019):
I will only say that I am suspicious of these because none of us can go back and really evaluate environment and mating patterns. Given free reign, I can supply an evolutionary scenario for almost any pattern of current IQ scores. If blacks had a mean IQ above other races I could posit something like this: they benefitted from exposure to the most rigorous environmental conditions possible, namely, competition from other people. Thanks to greater population pressures on resources, blacks would have benefitted more from this than any of those who left at least for a long time. Those who left eventually became Europeans and East Asians.
The hereditarians point to the academic success of East Asians in America as proof that IQ tests ‘measure’ intelligence, but East Asians in America are a hyper-selected sample. As the references I have provided show, second-generation Chinese immigrants in Spain and the Netherlands show lower educational attainments than other ethnies (the opposite is true in America), and this is explained by the context the immigrant family finds itself in: where do you allocate your energy, education or entrepreneurship? Such choices seem to be class- and context-based, since education is championed by the Chinese in America and not in Spain or the Netherlands. They refute any claims of ‘genetic superiority’—and they also refute, for that matter, the claim that genes matter for educational attainment (and therefore IQ)—although we did not need to know this to know that IQ is a bunk ‘measure’.
So if the Chinese cheat on standardized tests, then we should not accept their IQ scores; the fact that they, for example, provide non-random children from their largest provinces speaks to this dishonesty. They are like Lynn, in a way, avoiding the evidence that IQ scores are not what they seem—both Lynn and the Chinese government are dishonest cherry-pickers. The ‘fact’ that East Asian educational attainment can be attributed to genes is false; it is attributable to hyper-selectivity and to notions of class and of what constitutes ‘success’ in the country they emigrate to—so what they attempt is based on (environmental) context.
In a conversation with an IQ-ist, one may eventually find oneself discussing the concept of “superiority” or “inferiority” as it regards IQ. The IQ-ist may say that only critics of the concept of IQ place any sort of value-judgments on the number one gets when one takes an IQ test. But if the IQ-ist says this, then they are showing their ignorance regarding the history of the concept of IQ. The concept was, in fact, formulated to show who was more “intelligent”—“superior”—and who was less “intelligent”—“inferior.” But here is the thing: the terms “superior” and “inferior” are anatomic terms, which shows the folly of their attempted appropriation.
Superiority and inferiority
If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, they would only need to check out Lewis Terman’s very first Stanford-Binet tests. His scales—now in their fifth edition—state that IQs between 120 and 129 are “superior” while 130-144 is “gifted or very advanced” and 145-160 is “very gifted” or “highly advanced.” How strange… But, the IQ-ist can say that they were just products of their time and that no serious researcher believes such foolish things, that one is “superior” to another on the basis of an IQ score. What about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It’s ridiculous to use anatomic terminology (for physical things) and attempt to use them to describe mental “things.”
But perhaps the most famous hereditarian, Arthur Jensen, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies we are in danger of creating a “genetic underclass” (Jensen, 1969). This, like the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion on the concept of heritability.)
This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the social hierarchy of the time, which would then be used as justification for their spot on the social hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt at forwarding eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as evidenced a few decades before the conceptualization of standardized testing.
Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found out that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and IQ scores. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who is or is not “intelligent.” Then, some decades later, Binet and Simon finally constructed a test that discriminated between who they felt was or was not intelligent—which discriminated by social class. This test was finally the “measure” that would differentiate between social classes, since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in, on the basis of IQ scores that would show how they would work based on their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaorayoon, and Martin (2017) write that:
The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European Descent over non-Whites and recent immigrants (Gersh, 1987). By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So, as one can see, this “superiority” was baked into IQ tests from the very start, and the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:
With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.
While Binet and Simon were agnostic on the nature-nurture debate, the test items they most liked were those that differentiated between social classes the most (which means they were consciously chosen for that goal). And reading about their “ideal city,” we can see that those who have higher test scores are “superior” to those who do not. They were operating under the assumption that they would be organizing society along class lines, with the tests serving as measures of group mental ability. For Binet and Simon, it did not matter whether the “intelligence he sought to define” was inherited or acquired; they simply assumed it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making them circular (Au, 2009).
But the whole reason Binet and Simon developed their test was to rank people from “best” to “worst,” “good” to “bad.” But this does not mean that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had such rankings built in, even if this is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).
The goal, then, of psychometry is clear. Garrison (2009: 12) writes:
Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.
But such notions of superiority and inferiority, as I stated back in 2018, are nonsense when taken out of anatomic context:
It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.
An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (physical measures) and attempt to liken it to IQ—this is because nothing physical is being measured, not least because the mental isn’t physical nor reducible to it.
They were presuming to measure one’s “intelligence” and then stating that one has ‘superior’ “intelligence” to another—and that IQ tests were measuring this “superiority.” However, psychometrics is not a form of measurement—rankings are not measures.
Knowledge becomes reducible to a score in regard to standardized testing, so students, and in effect their learning and knowledge, are then reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). So what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).
In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)
…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)
One of the best examples of a valid measure is temperature, which has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature, of what is hot and what is cold: temperature is a physical property that quantitatively expresses heat and cold. Thermometers were invented to quantify temperature, whereas “IQ” tests were invented to quantify “intelligence.” Those like Jensen attempt to draw an analogy between temperature and IQ, thermometers and IQ tests. Thermometers measure temperature with a high degree of reliability, and so too, Jensen claims, do IQ tests. But this presumes that there is something, presumably physiological, “increasing” in the brain, when there is no such evidence. So for this and many more reasons, the attempted comparison of temperature to intelligence, and of thermometers to intelligence tests, fails.
So, IQ-ists claim, temperature is what thermometers measure, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and only then could numerical thermometers be constructed, along with a procedure to assign numbers to degrees of heat between and beyond the fixed points. The thermoscope was used for the establishment of fixed points. The thermoscope itself has no fixed points, so we do not have to circularly rely on the concept of fixed points for reference: if it goes up and down when placed in blood, for example, we can rightly infer that the temperature of blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into water that is scalding hot and then put the thermoscope in the same water, we note that it rises rapidly. The thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ thus justifies, in a non-circular way, the claim that temperature is truly being measured. We trust the physical sensation we get from whichever surface we are touching, and from this we can infer that thermoscopes validate thermometers, making the concept of temperature validated in a non-circular manner and a true measure of hot and cold. (See Chang, 2007 for a full discussion of the measurement of temperature.)
Thermometers could be tested by the criterion of comparability, whereas IQ tests, on the other hand, are “validated” against tests of educational achievement, other IQ tests (which were not themselves validated), and job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017). This “validation” is circular, since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).
For example, take introductory chemistry. When one takes the intro course, one sees how things are measured. Chemists may measure in moles or grams, or record the physical state of a substance; we may measure water displacement, or reactions between different chemicals. And although chemistry does not reduce to physics, these are all actual physical measures.
But the same cannot be said for IQ (Nash, 1990). We can rightly say that one scores higher than another on an IQ test, but that does not signify that some “thing” is being measured, because, to use the temperature example again, there is no independent validation of the “construct.” IQ is a (latent) construct, but temperature is a quantitative measure of hot and cold. It really exists, though the same cannot be said about IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature, for example (Midgley, 2018).
Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside or outside a building. One may say that we observe “intelligence” daily, but that is NOT a “measure”; it is just a descriptive claim. Blood pressure is another physical measure: it refers to the pressure in the large arteries of the circulatory system, due to the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievement then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).
It should also be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence. But thermometers are not identical to standardized scales, and the claim fails, as Nash (1990: 131) notes:
In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.
This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING!, which is important for life success since their tests correlate with it. But there is no precise specification of the measured object, no object of measurement, and no measurement unit, which “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).
Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. The scores do not reflect any inherent property of individuals; they reflect one’s relation to the society one is in (since all standardized tests are proxies for social class).
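The point that an IQ score is an artifact of its norming population can be sketched in a few lines. This is a minimal illustration with made-up raw scores and norm groups, not any actual test’s scoring procedure: a deviation IQ is just a z-score against a chosen reference sample, rescaled to mean 100 and SD 15, so the very same raw performance yields a different “IQ” under a different norm group.

```python
import statistics

def iq_from_raw(raw_score, norm_sample):
    """Convert a raw test score to a deviation IQ (mean 100, SD 15)
    relative to a chosen norming sample. The resulting "IQ" has no
    meaning outside that reference group."""
    mean = statistics.mean(norm_sample)
    sd = statistics.stdev(norm_sample)
    z = (raw_score - mean) / sd
    return 100 + 15 * z

# Hypothetical raw scores: the same raw score of 30 becomes a
# different "IQ" depending on which norming population is chosen.
norm_group_a = [20, 25, 30, 35, 40]   # mean 30
norm_group_b = [30, 35, 40, 45, 50]   # mean 40

print(iq_from_raw(30, norm_group_a))  # → 100.0 ("average")
print(iq_from_raw(30, norm_group_b))  # below 100 ("below average")
```

Nothing about the person changed between the two calls; only the reference population did, which is the sense in which the score reflects one’s relation to a group rather than an inherent property.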
One only needs to read into the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet and then over to Terman, Yerkes, and Goddard, the goal has been clear—enact eugenic policies on those deemed “unintelligent” by IQ tests, who just so happen to correspond with the lower classes in virtue of how the tests were constructed, which goes back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory, as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence,” which then reproduces the social structure and “justifies” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
The attempts to hijack anatomic terminology, as I have shown, are nonsense, since terms like “superior” and “inferior” have no sense outside of their anatomic contexts; and the first IQ-ists’ intentions were explicit in what they were attempting to “show,” which still holds for all standardized testing today.
Binet, Terman, Yerkes, Goddard, and others all had their own priors, which then led them to construct tests in such a way that would lead to their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this can be justified by reading Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policy).
Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and in no way, shape or form—contra Jensen—like the concept of IQ/”intelligence”, which Jensen conflates (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983 and see Nash, 1990: chapter 8 for a discussion of the measurement objection of Berka).
For these reasons, we should not claim that IQ tests ‘measure’ “intelligence,” nor that they measure one’s “genetic standing” or how “superior” one is to another; we should instead recognize that psychometrics is nothing more than political ranking.
In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)
In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the founder of sociobiology E.O. Wilson’s religious beliefs and the epiphany he described when he learned of evolution. A Christian author then used Sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked-in—IQ.
Philosopher of education John White has looked into the origins of IQ testing and the Puritan religion. The main link between Puritanism and IQ is predestination. The first IQ-ists conceptualized IQ—‘g’ or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, it was predestined whether one went to Hell before one even existed as a human being, whereas to the IQ-ists, IQ was predestined by genes.
John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:
Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton’s phrase ‘the extreme classes, the best and the worst.’ On the one hand there were the ‘gifted’, ‘the eminent’, ‘those who have honourably succeeded in life’, presumably … the most valuable portion of our human stock. On the other, the ‘feeble-minded’, the ‘cretins’, the ‘refuse’, those seeking to avoid ‘the monotony of daily labor’, ‘democracy’s ballast, not always useless but always a potential liability’.
A Puritan-type parallel can be drawn here—the ‘cretins’ and ‘feeble-minded’ are ‘the damned’ whereas the ‘gifted’ and ‘eminent’ are ‘the saved.’ This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a certain surfeit of genes that influence intellectual attainment. Contrast this with the Puritan view that certain people are chosen before they exist to be either damned or saved. On the IQ-ist view, certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism, when speaking of IQ, is in a way just like Puritan predestination—according to Galton, Burt, and other IQ-ists of the 1910s-1920s (ever since Goddard brought the Binet-Simon Scales back from France in 1910).
Some Puritans banned the poor from their communities seeing them as “disruptors to Puritan communities.” Stone (2018: 3-4) in An Invitation to Satan: Puritan Culture and the Salem Witch Trials writes:
The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.
Could the Puritan treatment of the poor be due to their belief in predestination? Puritan John Winthrop stated in his book A Model of Christian Charity that “some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection.” This, too, is still around today: IQ is said to set “upper limits” on one’s “ability ceiling” to achieve X. The poor are those who do not have the ‘right genes.’ This is also a reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The claim that one’s ability is predetermined by one’s genes—that each person has their own ‘ceiling of ability’ constrained by their genes—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.
To White (2006), the claim that we have an innate, general capacity—this ‘intelligence’—is wanting. He takes this further, though. In discussing Galton’s and Burt’s claim that there are ‘ability ceilings’—and in discussing a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs and that their scores increase by 15 points. This, then, would have a large effect on the correlation: “So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied ‘I doubt whether, had we returned a second time, the coaching would have affected our correlations’” (White, 2006: 16). Burt seems to be implying that a “ceiling of ability” exists, an idea he got from his mentor, Galton. White continues:
It would appear that neither Galton nor Burt had any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.
I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)
Burt believed that we should use IQ tests to shoe-horn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would then become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed to vocations that were based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:
Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.
So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school which ‘showed us’ which vocations we would be ‘good at’ based on a questionnaire). Having people in Binet’s ‘ideal city’ work based on their ‘known aptitudes’ would increase, not decrease, inequality, so Binet’s envisioned city is exactly like today’s world. Mensh and Mensh (1991: 24) continue:
When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.
White (2006: 42) writes:
Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.
Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. The Puritan concern for salvation mirrors the IQ-ist belief that one’s ‘innate intelligence’ dictates whether one will succeed or fail in life (based on one’s genes); both associated those lower on the social ladder—their work ethic and their morals—with the reprobate on the one hand and with low-IQ people on the other; both groups believed that the family is the ‘mechanism’ by which individuals are ‘saved’ or ‘damned’—the Puritans presuming salvation is transmitted through one’s family, and the IQ-ists that those with ‘high intelligence’ have children with the same; and both believed that their favored group should be at the top with the best jobs and the best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one is endowed with ‘innate general intelligence’ due to genes—a concept current-day IQ-ists have taken up wholesale.
White drew his parallel between IQ and Puritanism without being aware that one of the first anti-IQ-ists—an American journalist named Walter Lippmann—had drawn the same parallel in the mid-1920s. (See Mensh and Mensh, 1991 for a discussion of Lippmann’s grievances with the IQ-ists.) The parallel runs from Puritanism to Galton’s concept of ‘intelligence’ and on to that of the IQ-ists today. White (2005: 440) notes “that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence.” Though the evidence that White has marshaled in favor of the claim is suggestive: as noted, many parallels exist, and it would be some huge coincidence for all of these parallels to hold without a causal link (from Puritan beliefs to hereditarian IQ dogma).
This is similar to what Oyama (1985: 53) notes:
Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.
But this parallel between Puritanism and hereditarianism doesn’t just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of ‘information’ before activated by the physiological system for its uses still pervades our thought today, even though many others have been at the forefront to change that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).
The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have been shown to have their origins in eugenic beliefs, along with Puritan-like beliefs about being saved/damned based on something predetermined and out of one’s control—just like one’s genetics. The conception of ‘ability ceilings’—as read off IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in ‘ability ceilings’ and claim that genes contain a kind of “blueprint” (a claim still held today) which predestines one toward certain dispositions/behaviors/actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is ‘known’ about one’s group. When Binet wrote that, the gene was yet to be conceptualized, but the idea has stayed with us ever since.
So not only did the concept of “IQ” emerge due to the ‘need’ to ‘identify’ individuals for their certain ‘aptitudes’ that they would be well-suited for in, for instance, Binet’s ideal city, it also arose from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the ‘predictions’ it ‘makes’ (see Nash, 1990).
1. If differences in mental abilities are inherited, and
2. if success requires those abilities, and
3. if earnings and prestige depend on success,
4. then social standing will be based to some extent on inherited differences among people. (Herrnstein, 1971)
Richard Herrnstein’s article I.Q. in The Atlantic (Herrnstein, 1971) caused much controversy (Herrnstein and Murray, 1994: 10). Herrnstein’s syllogism argued that, as environments become more similar, if differences in mental abilities are inherited, if success in life requires such abilities, and if earnings and prestige depend on success, then social standing will be based “to some extent on inherited differences among people.” Herrnstein does not say so outright in the syllogism, but he is quite obviously talking about genetic inheritance. One can, however, look at the syllogism through an environmental lens, as I will show. Lastly, Herrnstein’s syllogism crumbles since social class predicts life success even when IQ is held equal. So since family background and schooling explain the IQ-income relationship (a measure of success), Herrnstein’s argument falls.
Note that Herrnstein came to measurement due to being a student of William Sheldon’s somatotyping. “Somatotyping lured the impressionable and young Herrnstein into a world promising precision and human predictability based on the measurement of body parts” (Hilliard, 2012: 22).
- If differences in mental abilities are inherited
Premise 1 is simple: “If differences in mental ability are inherited …” Herrnstein is obviously talking about genetic transmission, but we can look at this through a cultural/environmental lens. For example, Berg and Belmont (1990) showed that Jewish children of different socio-cultural backgrounds had different patterns of mental abilities, which clustered within certain socio-cultural groups (all Jewish), showing that mental abilities are, in large part, culturally derived. Another objection is that since there are no laws linking psychological/mental states with physical states (the mental is irreducible to the physical, meaning that mental states cannot be transmitted through physical genes), the genetic transmission of psychological/mental traits is impossible. In any case, one can appeal to the cultural transmission of mental abilities, disregard the genetic transmission of psychological traits, and the argument fails.
We can accept all of the premises of Herrnstein’s syllogism and argue an environmental case, in fact (bracketed words are my additions):
1. If differences in mental abilities are [environmentally] inherited, and
2. if success requires those [environmentally inherited] abilities, and
3. if earnings and prestige depend on [environmentally inherited] success,
4. then social standing will be based to some extent on [environmentally] inherited differences among people.
The syllogism hardly changes, but my additions change what Herrnstein was arguing for—environmental, not genetic differences cause success and along with it social standing among groups of people.
The Bell Curve (Herrnstein and Murray, 1994) can, in fact, be seen as an at-length attempt to prove the validity of the syllogism in an empirical manner. Herrnstein and Murray (1994: 105, 108-110) have a full discussion of the syllogism. “As stated, the syllogism is not fearsome” (Herrnstein and Murray, 1994: 105). They go on to state that if intelligence (IQ scores, AFQT scores) is only a bit influenced by genes, and if success is only a bit influenced by intelligence, then only a small amount of success is inherited (genetically). Note that their measure of “IQ” is the AFQT—which is a measure of acculturated learning, measuring school achievement (Roberts et al, 2000; Cascio and Lewis, 2005).
“How much is IQ a matter of genes?”, Herrnstein and Murray ask. They then discuss the heritability of IQ, relying, of course, on twin studies. They claim that the heritability of IQ is .6 based on the results of many twin studies. But the fatal flaw with twin studies is that the equal environments assumption (EEA) is false and, therefore, genetic conclusions should be dismissed outright (Burt and Simons, 2014, 2015; Joseph, 2015; Joseph et al, 2015; Fosse, Joseph, and Richardson, 2015; Moore and Shenk, 2016). Herrnstein (1971) also discusses twin studies in the context of heritability, attempting to buttress his argument. But if the main vehicle used to show that “intelligence” (whatever that is) is heritable is twin studies, why, then, should we accept the conclusions of twin research if the assumptions at the foundation of the field are false?
Murray himself has misused the concept: “When I – when we – say 60 percent heritability, it’s not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Later, he repeated that for the average person, “60 percent of the intelligence comes from heredity” and added that this was true of the “human species,” missing the point that heritability makes no sense for an individual and that heritability statistics are population-relative.
So Murray used the flawed concept of heritability in the wrong way—hilarious.
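The population-relativity point can be made concrete with a toy variance partition. In quantitative genetics, heritability is defined as the ratio h² = Var(G) / (Var(G) + Var(E)), a property of variance in a population, not of any person. The numbers below are hypothetical, chosen only to show that the same genetic variance yields different heritabilities in different populations:

```python
# Toy illustration (hypothetical numbers): heritability is a ratio of
# variances computed over a population, h2 = Var(G) / (Var(G) + Var(E)).
def h2(var_g, var_e):
    """Fraction of population variance attributable to genetic variance."""
    return var_g / (var_g + var_e)

# Same genetic variance, different environmental variance -> different h2.
varied_env = h2(60.0, 40.0)   # population with mixed environments: 0.6
uniform_env = h2(60.0, 0.0)   # perfectly uniform environment: 1.0

print(varied_env, uniform_env)
```

Because the denominator depends on how much environments vary in the particular population sampled, a statement like “60 percent of the intelligence comes from heredity” for an individual has no meaning within the statistic’s own definition.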
So the main point of Herrnstein’s argument is that if environments become more uniform for everyone, then the power of heredity will shine through, since the environment would be the same for everyone. But even if we could make the environment “the same,” what would that even mean? How is my environment the same as yours, even if our surroundings are identical, when I might react to or perceive the same thing differently than you do? The subjectivity of the mental undermines the claim that environments can be made “more uniform.” Herrnstein claimed that if no variance in environment exists, then the only thing that can influence success is heredity. That much follows, but how would it be possible to equalize environments? Are we supposed to start from square one, with the rich and powerful giving up their wealth and status to “equalize environments”? According to Herrnstein and the ‘meritocracy’, those who had earnings and prestige, which depended on success, which depended on inherited mental abilities, would still float to the top.
But what happens when both social class and IQ are equated? What predicts life success then? Stephen Ceci reanalyzed the data on Terman’s “Termites” (the nickname for the study’s participants) and found something quite different from what Terman had assumed. There were three groups in Terman’s study: A, B, and C. Groups A and C comprised the top and bottom 20 percent of the full sample in terms of life success. At the start of the study, all of the children “were about equal in IQ, elementary school grades, and home evaluations” (Ceci, 1996: 82). Depending on the test used, the IQs of the children ranged between 142 and 155, which then decreased by ten points during the second wave due to regression to the mean and measurement error. So although groups A and C had equivalent IQs, they had starkly different life outcomes. (Group B comprised 60 percent of the sample and enjoyed mediocre life success.)
Ninety-nine percent of the men in the group that had the best professional and personal accomplishments, i.e., group A were individuals who came from professional or business-managerial families that were well educated and wealthy. In contrast, only 17% of the children from group C came from professional and business families, and even these tended to be poorer and less well educated than their group A peers. The men in the two groups present a contrast on all social indicators that were assessed: group A individuals preferred to play tennis, while group C men preferred to watch football and baseball; as children, the group A men were more likely to collect stamps, shells, and coins than were the group C men. Not only were the fathers of the group A men better educated than those of group C, but so were their grandfathers. In short, even though the men in group C had equivalent IQs to group A, they did not have equivalent social status. Thus, when IQ is equated and social class is not, it is the latter that seems to be deterministic of professional success. Therefore, Terman’s findings, far from demonstrating that high IQ is associated with real-world success, show that the relationship is more complex and that the social status of these so-called geniuses’ families had a “long reach,” influencing their personal and professional achievements throughout their adult lives. Thus, the title of Terman’s volumes, Genetic Studies of Genius, appears to have begged the question of the causation of genius. (Ceci, 1996: 82-83)
Ceci used the Project Talent dataset to analyze the impact of IQ on occupational success. This study, unlike Terman’s, looked at a nationally representative sample of 400,000 high-school students “with both intellectual aptitude and parental social class spanning the entire range of the population” (Ceci, 1996: 85). The students were interviewed in 1960, and about 4,000 were interviewed again in 1974. “For all practical purposes, this subgroup of 4,000 adults represents a stratified national sample of persons in their early 30s” (Ceci, 1996: 86). Ceci and his co-author, Henderson, ran several regression analyses involving years of schooling, family and social background, and a composite score of intellectual ability based on reasoning, math, and vocabulary. They excluded those who were not working at the time because they were imprisoned, were housewives, or were still in school, which left a sample of 2,081 for the analysis.
In one analysis they looked at IQ as a predictor of variance in adult income, which showed an apparent effect of IQ. “However, when we entered parental social status and years of schooling completed as additional covariates (where parental social status was a standardized score, mean of 100, SD = 10, based on a large number of items having to do with parental income, housing costs, etc.—ranging from low of 58 to high of 135), the effects of IQ as a predictor were totally eliminated” (Ceci, 1996: 86). Social class and education were very strong predictors of adult income. So “this illustrates that the relationship between IQ and adult income is illusory because the more completely specified statistical model demonstrates its lack of predictive power and the real predictive power of social and educational variables” (Ceci, 1996: 86).
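The logic of that covariate analysis can be sketched with simulated data. Everything below is my own hypothetical setup, not Project Talent numbers: I assume social status drives schooling, IQ scores, and income, while IQ has no independent causal effect. A regression of income on IQ alone then shows a sizeable coefficient, which collapses toward zero once the covariates enter the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical simulated data: parental social status drives schooling,
# IQ scores, and income; IQ itself has no independent effect on income.
ses = rng.normal(100, 10, n)                # parental social status
school = 0.5 * ses + rng.normal(0, 5, n)    # years-of-schooling proxy
iq = 0.8 * ses + rng.normal(0, 8, n)        # IQ score tracks social class
income = 2.0 * ses + 3.0 * school + rng.normal(0, 20, n)

def ols_slopes(y, *xs):
    """Ordinary least squares; returns the slopes, dropping the intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# IQ alone looks like a strong predictor of income...
b_iq_alone = ols_slopes(income, iq)[0]

# ...but with social status and schooling as covariates,
# the IQ coefficient collapses toward zero.
b_iq, b_ses, b_school = ols_slopes(income, iq, ses, school)

print(f"IQ alone: {b_iq_alone:.2f}; IQ with covariates: {b_iq:.2f}")
```

The IQ coefficient collapses because, in this setup, IQ correlates with income only through their common cause; once that cause is held constant, nothing is left for IQ to explain. This is the pattern Ceci describes, illustrated under assumed numbers.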
They then considered high-, average-, and low-IQ groups of about equal size, examining the regressions of earnings on social class and education within each group.
Regressions were essentially homogeneous and, contrary to the claims by those working from a meritocratic perspective, the slope for the low IQ group was steepest (see Figure 4.1). There was no limitation imposed by low IQ on the beneficial effects of good social background on earnings and, if anything, there was a trend toward individuals with low IQ actually earning more than those with average IQ (p = .09). So it turns out that although both schooling and parental social class are powerful determinants of future success (which was also true in Terman’s data), IQ adds little to their influence in explaining adult earnings. (Ceci, 1996: 86)
The same was also true for the Project Talent participants who continued school: each increment of schooling completed had a corresponding effect on earnings.
Individuals who were in the top quartile of “years of schooling completed” were about 10 times as likely to be receiving incomes in the top quartile of the sample as were those who were in the bottom quartile of “years of schooling completed.” But this relationship does not appear to be due to IQ mediating school attainment or income attainment, because the identical result is found even when IQ is statistically controlled. Interestingly, the groups with the lowest and highest IQs both earned slightly more than average-IQ students when the means were adjusted for social class and education (adjusted means at the modal value of social class and education = $9,094, $9,242, and $9,997 for low, average, and high IQ groups, whereas the unadjusted means at this same modal value = $9,972, $9,292, and $9,278 for the low, average, and high IQs.) (Perhaps the low IQ students were tracked into plumbing, cement finishing, and other well-paying jobs and the high-IQ students were tracked into the professions, while average IQ students became lower paid teachers, social workers, ministers, etc.) Thus, it appears that the IQ-income relationship is really the result of schooling and family background, and not IQ. (Incidentally, this range in IQs from 70 to 130 and in SES from 58 to 135 covers over 95 percent of the entire population.) (Ceci, 1996: 87-88)
Ceci’s analysis parallels Bowles and Nelson’s (1974), which found that adult earnings were influenced more by social status and schooling than by IQ. Bowles and Nelson (1974: 48) write:
Evidently, the genetic inheritance of IQ is not the mechanism which reproduces the structure of social status and economic privilege from generation to generation. Though our estimates provide no alternative explanation, they do suggest that an explanation of intergeneration immobility may well be found in aspects of family life related to socio-economic status and in the effects of socio-economic background operating both directly on economic success, and indirectly via the medium of inequalities in educational attainments.
(Note how this also refutes claims from PumpkinPerson that IQ explains income—clearly, as was shown, family background and schooling explain the IQ-income relationship, not IQ. So the “incredible correlation between IQ and income” is not due to IQ, it is due to environmental factors such as schooling and family background.)
Since social class/family background and schooling explain the IQ-income relationship, not IQ itself, Herrnstein’s syllogism, and with it The Bell Curve (an attempt to prove the syllogism at book length), is refuted. A main premise of The Bell Curve was that society is becoming increasingly genetically stratified, with a “cognitive elite”. But Conley and Domingue (2015: 520) found “little evidence for the proposition that we are becoming increasingly genetically stratified.”
IQ testing legitimizes social hierarchies (Chomsky, 1972; Roberts, 2015) and, in Herrnstein’s case, attempted to show that social hierarchies are an inevitability due to the genetic transmission of mental abilities that influence success and income. Such research cannot be socially neutral (Roberts, 2015) and so, this is yet another reason to ban IQ tests, as I have argued. IQ tests are a measure of social class (Ceci, 1996; Richardson, 2002, 2017), and such tests were created to justify existing social hierarchies (Mensh and Mensh, 1991).
Thus, the very purpose of IQ tests was to confirm the current social order as naturally proper. Intelligence tests were not misused to support hereditary theories of social hierarchies; they were perfected in order to support them. The IQ supplied an essential difference among human beings that deliberately reflected racial and class stratifications in order to justify them as natural. Research on the genetics of intelligence was far from socially neutral when the very purpose of theorizing the heritability of intelligence was to confirm an unequal social order. (Roberts, 2015: S51)
Herrnstein’s syllogism may be valid in form, but it is not sound. Herrnstein was implying that genes were the cause of mental abilities and then, eventually, of success and prestige. But one can look at Herrnstein’s syllogism from an environmentalist point of view (do note that the hereditarian/environmentalist debate is futile and perpetuates the claim that IQ tests test ‘intelligence’, whatever that is). When Terman’s Termites were matched for IQ, family background and schooling explained the IQ-income relationship. Ceci (1996) showed, in analyses of Terman’s data and in a replication of Bowles and Nelson’s (1974) findings, that social class and schooling, not IQ, explain the relationship between IQ and income.
The conclusion of Herrnstein’s argument can, as I’ve already shown, be an environmental one, reached through cultural, not genetic, transmission. Arguments that IQ is ‘genetic’ imply that certain individuals/groups will tend to stay in their social class, as Pinker (2002: 106) states: “Smarter people will tend to float into the higher strata, and their children will tend to stay there.” This, as has been shown, is due to social class, not ‘smarts’ (scores on an IQ test). In any case, this is yet another reason why IQ tests and the research behind them should be banned: IQ tests attempt to justify the current social order as ‘inevitable’ due to genes that influence mental abilities. That claim is false; along with the fact that America is not becoming more genetically stratified (Conley and Domingue, 2015), this means Herrnstein’s syllogism crumbles. The argument attempts to justify the claim that class has a ‘genetic’ component (as Murray, 2020, attempts to show), but subsequent analyses and arguments have shown that it does not hold.