…but the question “What is intelligence?” has only ever been answered by a shifting social consensus. So perhaps, like the stuff of dreams and nightmares, it too belongs in the realm of mere appearances. (Goodey, 2011)
IQ groupings/cutoffs are arbitrary. By “arbitrary” I mean something without reason or justification, something not supported by facts or argument. What facts or reasons justify the groupings? The arbitrariness of IQ is also seen historically: score distributions were changed whenever different assumptions were made about the “nature” of “intelligence” (e.g., Terman, 1916; Hilliard, 2012). In this article, I will argue that IQ cutoffs are arbitrary, with no rational justification behind them; test constructors use them simply because they yield the distributions they want.
The arbitrariness of such cutoffs and groupings has been known since American test constructors began building the first tests, after Goddard brought Binet and Simon’s test over from France in 1910. Terman (1916: 89) warned “That the boundary lines between such groups [feebleminded, dull, superior, genius etc.] are arbitrary.” It is also in this same book, The Measurement of Intelligence, that Terman adjusted the scores of men and women, adding and subtracting the items that men and women most often got right or wrong in order to even out their scores. Terman put items on the test that men were good at (“arithmetical reasoning, giving differences between a president and a king, solving the form board, making change, reversing hands of a clock, finding similarities, and solving ‘the induction test’” [Terman, 1916: 81]) alongside items that women were good at (“drawing designs from memory, aesthetic comparison, comparing objects from memory, answering the ‘comprehension questions’, repeating digits and sentences, tying a bow-knot, and finding rhymes” [Terman, 1916: 81]). The same can be seen in SAT differences between men and women, as Rosser (1989) points out. It is a matter of item selection/analysis and of what the desired distribution of scores is.
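The mechanics of this kind of item selection are easy to see with a toy example. Below is a minimal Python sketch, with made-up pass rates (not Terman’s actual data), showing how an item pool containing group-favoring items yields equal or unequal group means depending purely on which items the constructor keeps:

```python
# Toy illustration (made-up pass rates, not Terman's data): each candidate
# item has a different expected pass rate for group A and group B, and the
# test constructor decides which items survive item analysis.
items = [
    {"name": "arithmetical reasoning", "A": 0.70, "B": 0.55},  # favors A
    {"name": "making change",          "A": 0.65, "B": 0.50},  # favors A
    {"name": "drawing from memory",    "A": 0.50, "B": 0.68},  # favors B
    {"name": "finding rhymes",         "A": 0.52, "B": 0.66},  # favors B
]

def expected_score(selected, group):
    """Expected number of the selected items a member of `group` passes."""
    return sum(item[group] for item in selected)

balanced = items        # keep equal numbers of A- and B-favoring items
unbalanced = items[:2]  # keep only the A-favoring items

gap_balanced = expected_score(balanced, "A") - expected_score(balanced, "B")
gap_unbalanced = expected_score(unbalanced, "A") - expected_score(unbalanced, "B")

# A balanced pool yields near-equal group means; dropping the B-favoring
# items manufactures a group difference from the same population.
print(round(gap_balanced, 2), round(gap_unbalanced, 2))
```

The same population produces either “no group difference” or a sizeable one, depending only on the constructor’s choices at the item-analysis stage.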
Such arbitrary IQ cutoffs for the “groups” Terman made value judgments about reflect the need of IQ-ists to conceptualize “intelligence” as normally distributed, with most people falling in the middle and fewer on the tails: “geniuses” and above on the right and the “mildly impaired and delayed” on the left, per the 5th edition of the Stanford-Binet. But the normal distribution for “IQ” is a myth (Richardson, 2017: chapter 2). Because IQ tests are constructed to be normally distributed, any and all “group distinctions” and “cutoffs” are arbitrary. The test was created first, AND THEN its constructors attempted to deduce what it “measures” on the basis of correlations with other tests and with academic achievement. Further, even showing a relationship between IQ scores and academic achievement is irrelevant, because the two are different versions of the same test; the item content is similar between them (Schwartz, 1975; Beaujean et al., 2018). The relationship is a creation of the tests’ constructors, not something we just happened to find when these tests were created.
Thus, the “bell curve” is an artifact, not a fact, of test construction (Simon, 1997). Items are added and removed on a sample population until the desired distribution is reached. It is this artificial distribution that all IQ theorizing rests on, and it is this artificial distribution that IQ-ists use for their cutoffs between different “grades” of “intelligence.” On the constructed bell curve, about 2.2% of people fall below 70; the test was constructed to get this result. So, if the bell curve is an artificial production created by humans, then so is the classification system (“intelligence”). And if the classification system is an artificial creation, then so too is the concept of “learning disability.” Bazemore, Shinaprayoon, and Martin write that:
By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So basically, test constructors had in mind, before they developed the test, who was or was not “intelligent,” and then built the test to fit their expectations. One might ask, “Why does this matter if it happened 100 years ago?” It matters because there is no conceptual support for hereditarian thinking about psychological traits, and if there is no support, then the only reason such thinking persists is prejudice (Mensh and Mensh, 1991). Furthermore, newer IQ tests use items similar to older ones, and newer tests are “validated” against older tests (like the Stanford-Binet); biases in those tests therefore carry over, even without conscious bias toward groups being a goal (Richardson, 2002: 287).
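For the record, the “about 2.2%” figure mentioned above is simply the area of the normal curve more than two standard deviations below the mean, given the conventional mean of 100 and standard deviation of 15. A quick check in Python, using only the standard library:

```python
import math

# The share of a Normal(mean=100, sd=15) curve falling below 70, i.e. more
# than two standard deviations below the mean: z = (70 - 100) / 15 = -2.
def normal_cdf(x, mean=100.0, sd=15.0):
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

p_below_70 = normal_cdf(70)
print(round(100 * p_below_70, 2))  # ≈ 2.28, the "about 2.2%" of the text
```

The figure follows mechanically from the chosen mean and standard deviation; it is a property of the imposed distribution, not of any population.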
The arbitrariness of IQ can also be seen in the cutoff for learning disability: a score of 70 or below is taken to mean that the individual needs remedial help, with the IQ test treated as a sound instrument for that purpose. Yet IQ tests are arbitrary when used to reflect deficits in everyday functioning (Arvidsson and Granlund, 2016). Cutoffs for learning disabilities have fluctuated between IQ 70 and 85 over the years. Someone in the US is defined as “learning disabled” if there is a discrepancy between their academic achievement and their “intelligence” (i.e., IQ test score). But is there any justification for such a cutoff, where falling under a certain magic number makes one “learning disabled”?
The answer is no, because IQ is irrelevant to the definition of learning disabilities (Siegel, 1988, 1989, 1993). It is absolutely unnecessary to give IQ tests to identify the learning disabled, and the existence of a discrepancy is not a necessary condition (Gunderson and Siegel, 2001). People under IQ 70 frequently do not need specialist services, whereas people with IQs over 70 frequently do (Whitaker, 2004). Such tests only show WHAT a person has learned; they DO NOT estimate one’s intellectual “capability.” Since IQ tests are tests of a certain type of knowledge, it follows that exposure to the items on the test and to the test’s structure, along with other non-cognitive variables (Richardson, 2002), explains test score differences, and that these differences can be built into and out of the test on the basis of a priori assumptions. It further follows that a low score means one was not exposed to the item content and structure of the test, not that one has a “deficit of intelligence” as IQ-ists claim.
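The discrepancy definition described above can be made concrete with a hypothetical sketch; the threshold and scores here are purely illustrative, not taken from any statute or test manual:

```python
# Hypothetical sketch of a discrepancy criterion: flag "learning disability"
# when achievement lags the IQ score by more than a chosen threshold.
# Threshold and scores are illustrative only.
def discrepancy_flag(iq, achievement, threshold=15):
    """Return True when the IQ-achievement gap exceeds `threshold`."""
    return (iq - achievement) > threshold

# Moving the arbitrary threshold changes who counts as "learning disabled":
print(discrepancy_flag(100, 84, threshold=15))  # True  (gap of 16 > 15)
print(discrepancy_flag(100, 84, threshold=20))  # False (gap of 16 <= 20)
```

The same child is or is not “learning disabled” depending entirely on where the threshold is set, which is the arbitrariness at issue.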
Webb and Whitaker (2012) describe the double think employed by many clinical psychologists, privately acknowledging the limitations of IQ tests and the arbitrary nature of the cut-off score of 70 IQ points that defines learning disability, whilst publicly and professionally talking about learning disabilities “as if it were a real, naturally occurring condition” (p. 440). Thus the diagnostic procedure involving IQ tests can be seen as a way of passing off culturally specific norms of competence (measured through arcane rituals of assessment) as if they were universal and incontrovertible. (Chinn, 2021: 137-138)
The arbitrariness of IQ 70 as the cutoff for mental disability also rears its head in the courtroom, when defendants are on trial for murder. In Atkins v. Virginia, SCOTUS ruled that it was unconstitutional to execute intellectually disabled people. Then in Hall v. Florida, it was ruled that an IQ score was not, by itself, sufficient for sentencing decisions; other medical/diagnostic criteria must be used. Some may object: “IQ matters to people who normally dismiss it only when a defendant facing execution is spared because he is found to have an IQ below 70!” Never mind the ethical debate over the death sentence, the arbitrary cutoff of 70 for mental retardation—which, as has been shown, does not hold—has numerous legal and societal consequences for the individual so unluckily deemed “disabled.”
Kanaya and Ceci (2007) argue that when an individual takes a test (at the beginning or the end of the test’s norming cycle) can dictate whether they fall under the arbitrary IQ 70 cutoff and so escape execution. The year in which a defendant on trial for murder was tested can thus literally determine whether they are put to death. Prosecutors in many US states have also successfully argued for “ethnic adjustments” to IQ. Sanger (2015) reviews many US cases in which prosecutors have done so. Arguing that such adjustments are logically, clinically, and constitutionally unsound, he reviews studies showing that abuse, neglect, poverty, and trauma decrease test scores, and that their effects can be epigenetically passed on through multiple generations. Sanger (2015: 148-149) concludes:
Furthermore, any correlations between the average IQ test scores of racial cohorts (or average scores of cohorts to the overall community norm) are not attributable to race and are heavily influenced by race-neutral environmental factors. Those race-neutral environmental factors include the effects of the environment of childhood abuse, stress, poverty, and trauma. Such adverse environmental (but race-neutral) factors likely result in phenotypic manifestations, which include epigenetic changes affecting intellectual ability and result in greater numbers of persons with intellectual disabilities within that population. The individuals whose intellectual ability is adversely affected by those harmful environmental factors are disproportionately represented by minority groups and among those facing the death penalty in the United States.
Therefore, the actual recipients of death sentences—the people on death row—are poor, of color, and have disproportionately been subjected to stress, poverty, abuse, and trauma. These very people are likely to suffer from actual phenotypic/biological impairment in intellectual functioning that can be passed down by way of programmed epigenetic gene expression through generations.
Quite clearly, this arbitrary IQ 70 cutoff for “intellectual disability” has real-life implications, and in some cases it is a matter of life or death, turning on “ethnic adjustments” and on when in a test’s lifecycle, before renorming, an individual happened to take it. Sanger showed that the IQ scores of blacks and “Hispanics” are routinely adjusted upwards, pushing them above the “cutoff” so that they can face the death penalty.
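Kanaya and Ceci’s norming-cycle point rests on the Flynn effect: scores against a fixed set of norms inflate by roughly 0.3 IQ points per year until the test is renormed, so the same performance earns different scored IQs at different points in the cycle. A simplified Python sketch, with illustrative numbers:

```python
# Simplified sketch of the norming-cycle effect (the ~0.3 points/year Flynn
# drift is an approximation; the performance level is illustrative).
FLYNN_DRIFT = 0.3  # IQ points of norm obsolescence per year

def scored_iq(performance, years_since_norming):
    """IQ assigned to a fixed level of performance as the norms age."""
    return performance + FLYNN_DRIFT * years_since_norming

# The same person, performing at a level of 69 against fresh norms, lands on
# opposite sides of the IQ 70 cutoff depending on when they are tested:
early = scored_iq(69, years_since_norming=1)   # newly renormed test -> 69.3
late = scored_iq(69, years_since_norming=15)   # aging norms -> 73.5
print(early <= 70, late <= 70)
```

On this toy model, identical performance is scored below the cutoff early in the cycle and above it late in the cycle, which is why the administration year can decide a capital case.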
In my view, such distinctions between “IQ groups” like those created by Terman, and continuing into the present day, are an attempt at naturalizing “intellectual disability”; an attempt at saying that these are “natural kinds.” Yet “intelligent people and intellectually disabled people are not natural kinds but historically contingent forms of human self-representation and social reciprocity, of relatively recent historical origin” (Goodey, 2011: 13). So intellectual disability, learning disability, intelligence: these are all social constructs (which do not denote natural kinds), and they change with the times.
But Herrnstein and Murray (1994: 1) argued that “the word intelligence describes something real and…it varies from person to person is as universal and ancient as any understanding about the state of being human. Literate cultures everywhere and throughout history have words for saying that some people are smarter than others.” Unfortunately for Herrnstein and Murray, “Intelligence as currently and conventionally understood by psychologists is a brashly modern notion” (Daston, 1992: 211).
The arbitrariness of the designation “intelligence” means that “IQ/intelligence” is not a “thing,” nor a “natural kind”; it is a socially constructed historical notion (Goodey, 2011), as is the concept of “giftedness” (Borland, 1997). The creation of these tests, and indeed the label “intellectually disabled,” is thoroughly racialized (Chinn, 2021). The socially constructed character of “intelligence” can be seen just by analyzing the test items: they are heavily classed and racialized, specifically toward the white middle class. And when it comes to the death penalty and IQ, the issues are very serious: when an individual was given a test may be the deciding factor between life and death, and minorities are more likely to be on death row and more likely to experience abuse, trauma, and the like, which can be passed on generationally and in turn influence test scores. Add to this test construction itself, where there is no justification for any particular set of items beyond whatever yields the desired distribution. That is why “IQ” is arbitrary.
We need to dispense with the idea that there is a biological “thing” called “intelligence”; we need to understand that what we call “intelligence” is socially constructed. What psychologists call “intelligent” is answering items correctly and scoring high on tests that are heavily biased toward certain races and classes in America. Once we understand that this concept is socially constructed and not biological, maybe we won’t repeat past mistakes, like sterilizing tens of thousands of people in the name of eugenics.