IQ-ists like to point to the correlation between “IQ” tests and scholastic achievement tests like the SAT (Scholastic Assessment Test) as one piece of evidence for the ‘validity’ of IQ: the same kinds of score distributions seen on the SAT are also seen on ‘standard IQ tests.’ The argument, however, is circular. What the IQ-ists fail to realize, and what I rarely see discussed, is that the process of item selection and removal has a strong effect on scores. Such score differences are, indeed, built into the SAT, just as they are built into IQ tests.
The SAT was created in 1926 by the eugenicist Carl Brigham, one of the psychologists who also worked on the Army Alpha tests. When he created it, the test was called the Scholastic Aptitude Test. Harvard first used it as an admissions test, and the other Ivy League schools then adopted it as a scholarship test. The SAT was developed directly from the first IQ tests, so the two are intricately linked. First, I will discuss gender differences; second, race differences; then I will discuss how and why these differences persist.
Gender differences in the SAT
Differences in IQ were built out of some tests (as with Terman’s Stanford-Binet), but for the SAT, items and subtests were directly chosen BECAUSE they showed a gap in knowledge between the two groups. Men have scored higher on the SAT than women since the test’s inception, driven by men’s higher math scores and only partially offset by women’s higher verbal scores. The ETS then changed the test in the late 80s, stating that there was now “a better balance for the scores between the sexes” (quoted in Rosser, 1989: 38), which in practice meant an eleven-point score advantage for men: they had added more verbal items that favored men, but did not add more math items that favored women. Interestingly, though, girls have higher GPAs than boys.
For example, of all the SAT math questions, the one that produced the largest gender gap asked test-takers to compute the win-loss record of a basketball team, as Rosser (1989: 40-41) notes in tables 2 and 3:
Interestingly, Rosser (1989: 19) reports that in one county in Maryland, where boys and girls took the same advanced math courses, girls outscored boys academically yet had SAT-M scores 37-47 points lower than boys. The kinds of items that go onto a test are tried out on a sample of children, and the constructors get the kinds of distributions they want: by adding or subtracting certain questions and subtests, they can produce what they want to see. Rosser (1989) notes that “if the 10 most “pro-boy” items were replaced with items similar to the 10 most “pro-girl” items, boys nationally would outscore girls by about 29 points thus eliminating more than a third of the existing gender gap” (pg 23). Further, for the 1986 SAT, if the ten items that most favored boys had been removed and replaced by items that favored girls, girls would have outscored boys by 4 points. In virtue of what is the current test the ‘right’ one, and what justifies the assumptions of the ETS? In Rosser’s (1989) analysis, “Hispanic” women showed the largest gap, and African-American women the smallest, when compared with men of their own ‘race.’ See some examples from the Appendixes of the items which showed the most extreme sex differences (pg 156-161):
Looking at questions such as these, and understanding how the SAT has evolved with respect to gender differences since its inception in the mid-1920s, helps us understand how and why boys and girls score differently. If different assumptions had been made about the ‘nature’ of ‘cognitive’ differences between boys and girls, more questions favoring girls would have been added, and we would be having a whole different kind of conversation right now.
When it comes to math, though, Niederle and Vesterlund (2010: 140) conclude:
… that competitive pressure may cause gender differences in test scores that exaggerate the underlying gender differences in math skills.
Women are, furthermore, less likely to guess (that is, less likely to take risks) than men. This matters in the testing environment, where, under the pre-2016 SAT scoring rules, a wrong guess was penalized while a blank answer was not.
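The arithmetic behind that penalty is worth making explicit. Here is a minimal sketch; the +1 / −1/4 / 0 scoring and the five answer choices are the pre-2016 SAT’s actual rules, while the function name is mine:

```python
from fractions import Fraction

# Pre-2016 SAT scoring: +1 for a correct answer, -1/4 for a wrong one,
# 0 for a blank, with five answer choices per question.

def expected_guess_value(choices_remaining: int) -> Fraction:
    """Expected points from guessing uniformly among the remaining choices."""
    p_correct = Fraction(1, choices_remaining)
    return p_correct * 1 + (1 - p_correct) * Fraction(-1, 4)

# A blind guess among all five choices has the same expected value as a blank:
print(expected_guess_value(5))  # 0
# Eliminating even one distractor makes guessing strictly better than omitting:
print(expected_guess_value(4))  # 1/16
print(expected_guess_value(3))  # 1/6
```

So a test-taker who can rule out even one option loses expected points by leaving the question blank, which is how a group-level difference in willingness to guess translates directly into a score difference.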
The new SAT has disadvantaged female test-takers; the AEI has noted that such differences have persisted for 50 years. Yes, SAT-M score differences exist, but, as noted above, when children were taught in the same advanced math classroom, girls outperformed boys academically yet scored lower on the SAT-M section, and looking at the SAT-M questions themselves points to why this paradox occurs. And, to top it all off, the SAT “underpredicts first-year college performance for women and overpredicts for men — thus violating one of the testers’ own, specially designed standard of validity” (Mensh and Mensh, 1991: 71).
Race and the SAT
Now we turn to race and the SAT. Kidder and Rosner (2002) studied 100,000 SAT test-takers from 1989, along with another database of over 200,000 test-takers in New York. They examined around 580 SAT questions from 1988-89 and noted the percentage of each question that white, black, and Mexican students answered correctly. If 60 percent of whites answered a question correctly and only 20 percent of blacks did, the racial impact of that question was 40 percentage points. Across 78 verbal items, whites answered 59.8 percent correctly while blacks answered 46.4 percent correctly, for a racial impact of 13.4 (Kidder and Rosner, 2002: 148).
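Kidder and Rosner’s “racial impact” statistic is simply the gap, in percentage points, between the reference group’s and the comparison group’s percent-correct on an item. A minimal sketch, using the numbers quoted above (the function name is mine):

```python
# "Racial impact" as Kidder and Rosner (2002) define it: the difference, in
# percentage points, between the share of the reference group and the share
# of the comparison group answering an item correctly.

def racial_impact(pct_correct_reference: float, pct_correct_comparison: float) -> float:
    return pct_correct_reference - pct_correct_comparison

# Their illustrative item: 60% of whites vs. 20% of blacks correct.
print(racial_impact(60, 20))                # 40
# Their aggregate result over 78 verbal items: 59.8% vs. 46.4% correct.
print(round(racial_impact(59.8, 46.4), 1))  # 13.4
```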
How are such differences explained? Of the six sections on the SAT, the ETS uses one for experimental test items. Using whites as the reference group, if blacks or another group answer an experimental question correctly more often than whites do, the item is discarded as invalid. Kidder and Rosner (2002) note an item of medium difficulty on which whites scored 62 percent correct while blacks scored 38 percent. On a comparable question of similar difficulty, however, blacks outscored whites by 8 percentage points, and women outscored men by 9. Au (2008: 66) explains:
Test designers determined that this question, where African Americans scored higher than whites (and women higher than men), was psychometrically invalid and was not included in future SATs. The reason for this was that ETS bases its test question selection on statistics established by performance averages on previous tests: The students who statistically on average score higher on the SAT did not answer this question correctly enough of the time, while those who statistically on average score lower on the SAT answered this question correctly too often. By psychometric standards this means that this question was an anomaly and therefore was not considered a “valid” or “reliable” test question for a standardized test such as the SAT. White students outperform black students on the SAT. Higher-scoring students, who tend to be white, correctly answer SAT experimental test questions at higher rates than typically lower-scoring students, who tend to be non-White, ensuring that the test question selection process itself has a self-reinforcing, racial bias.
Rosner, in his article On White Preferences, explains this well:
I don’t believe that ETS–the Educational Testing Service, the developer of the SAT and the source of this October 1998 test data–intended for the SAT to be a white preference test. However, the “scientific” test construction methods the company uses inexorably lead to this result. Each individual SAT question ETS chooses is required to parallel the outcomes of the test overall. So, if high-scoring test-takers–who are more likely to be white–tend to answer the question correctly in pretesting, it’s a worthy SAT question; if not, it’s thrown out. Race and ethnicity are not considered explicitly, but racially disparate scores drive question selection, which in turn reproduces racially disparate test results in an internally reinforcing cycle.
My considered hypothesis is that every question chosen to appear on every SAT in the past ten years has favored whites over blacks. The same pattern holds true on the LSAT and the other popular admissions tests, since they are developed similarly. The SAT question selection process has never, to my knowledge, been examined from this perspective. And the deeper one looks, the worse things get. For example, while all the questions on the October 1998 SAT favored whites over blacks, approximately one-fifth showed huge, 20 percent gaps favoring whites. Skewed question selection certainly contributes to the large test score disparities between blacks and whites.
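The self-reinforcing cycle Rosner describes can be sketched as a toy simulation: build a pool of candidate items with varying group gaps, score everyone, then keep only the items that high scorers answer correctly more often than low scorers, a rough stand-in for ETS-style item discrimination statistics. All parameters here (group sizes, difficulties, the gap distribution) are invented for illustration, not taken from any real test:

```python
import random

random.seed(0)

N_PER_GROUP = 2000   # simulated test-takers in each group
N_ITEMS = 200        # candidate items in the experimental pool

# Each item has a base probability-correct plus a group-specific shift:
# positive shifts favor group A, negative shifts favor group B. The pool
# starts with a modest average tilt toward group A.
items = [(random.uniform(0.3, 0.7), random.uniform(-0.10, 0.20))
         for _ in range(N_ITEMS)]

def simulate_responses():
    """Each person answers each item independently with the group-adjusted probability."""
    responses = []
    for sign in (+1, -1):                      # +1 = group A, -1 = group B
        for _ in range(N_PER_GROUP):
            responses.append([random.random() < p + sign * shift
                              for p, shift in items])
    return responses

responses = simulate_responses()
totals = [sum(person) for person in responses]
median_total = sorted(totals)[len(totals) // 2]

def item_discriminates(i):
    """Keep item i iff high scorers beat low scorers on it by at least 5 points."""
    hi = [person[i] for person, t in zip(responses, totals) if t >= median_total]
    lo = [person[i] for person, t in zip(responses, totals) if t < median_total]
    return sum(hi) / len(hi) - sum(lo) / len(lo) > 0.05

kept = [i for i in range(N_ITEMS) if item_discriminates(i)]
pool_tilt = sum(shift for _, shift in items) / N_ITEMS
kept_tilt = sum(items[i][1] for i in kept) / len(kept)

# Because group A scores higher overall, high scorers are mostly group A, so
# the items that "discriminate" are disproportionately the pro-A ones: the
# selected test is more tilted toward group A than the pool it came from.
print(f"average pro-A tilt of full pool:  {pool_tilt:+.3f}")
print(f"average pro-A tilt of kept items: {kept_tilt:+.3f}")
```

The selection rule never looks at group membership, yet the kept-item tilt exceeds the pool tilt: whatever gap the item pool started with is amplified in the selected test, which is exactly the indirect mechanism Rosner and Au describe.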
So, in an attempt to rectify this situation, the College Board wanted to award “adversity points”: students’ SAT scores would be compared to their parents’ SES and adjustments would then be made to their scores. There was also discussion of whether to give 230 “bonus points” to blacks and 130 to “Hispanics,” and to penalize Asians by 50 points.
But why do Asians score slightly higher than whites? Simple: they, too, fall into the group of higher-scoring students, and so the test items would, indirectly, be shaped to them. The same holds for ‘Hispanics’ and blacks, as Kidder and Rosner note regarding test questions, and so it would hold for Asians and whites. Such discussions of “bonus points” and penalties, while a start, do not get at the assumptions baked into these kinds of tests. Such tests are biased in virtue of their content, that is, the item content.
Kidder and Rosner (2002: 210) conclude:
… by reminding readers that, based on our empirical findings and review of the educational measurement literature, the process currently used to construct the SAT, LSAT, GRE, and similar tests unintentionally operates to select questions with larger racial and ethnic disparities (favoring Whites).
While test-prep can, of course, be identified as one factor causing group X to score higher than group Y, other, more plausible hypotheses can be, and have been, considered. Analyzing the items on these tests, we see that they are far from ‘objective’ ‘measures’ of ‘ability.’ The IQ-ist will cry that some ‘thing’ is being measured, in virtue of the correlation between the SAT and IQ, but no ‘thing’ is being measured by any of these tests (Nash, 1990); they were created for the purpose of justifying and reproducing our current social hierarchies (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
One need only know how such items are selected for inclusion on these tests. Andrew Strenio writes in his book The Testing Trap (1981: 95):
We look at individual questions and see how many people get them right, and which people get them right. We consciously and deliberately select questions so that the kind of people who scored low on the pretest will score low on subsequent tests. We do the same for the middle or high scorers. We are imposing our will on the outcome.
There is only one way for test constructors to do this: presuppose, a priori, who the high, middle, and low scorers are, and construct the test accordingly.
Consider a thought experiment in which our society is reversed: blacks outscore whites and have better life prospects, and the same holds between women and men. The hereditarians in this imagined world would then find that scores on these tests correlated with smaller brain sizes, fewer neurons, and so on. What, then, could the test constructors say to justify why women and blacks scored higher than men and whites?
Though these are 35-year-old questions, I fail to see why anything would be different in 2020; test construction has not changed. Such assumptions are, as argued at length, built into the test. The outcome of these tests is, of course, determined by the nature of the test’s content, its questions. IQ-ists then point to the score differentials between groups (men/women, blacks/whites, etc.) and say, “See! There are differences, so we are not all the same blank slates!” But statements like this fail to appreciate how tests are constructed: they assume these tests are ‘objective measures’ that somehow reveal one’s ‘genetic potential,’ and that claim is false.
If which items are chosen for inclusion on the test is determined by the test constructors, via the experimental questions on the SAT, on which whites are more likely to score higher, then it follows (and the empirical evidence shows) that what drives the large score disparities between whites and blacks on the SAT is, in fact, biased test questions. The same holds for the differences between men and women. Change the assumptions and you change the nature and outcome of the test, and then you change what you study to ‘find’ the differences ‘causing’ such score gaps between groups. Hopefully, putting it this way shows the absurdity of using biased tests to argue that ‘biology’ is somehow responsible for score differences between groups.
Such inequalities in standardized test scores like the SAT, just as with IQ, are structured into the test itself; tests like this only reproduce the differences between groups that they claim to ‘measure,’ which is a circular exercise. Studies like this show the folly of thinking that one group is ‘genetically smarter’ than another, which is what the hereditarians set out to prove. Too bad they have no measuring unit, object of measurement, or measured object.
The East Asian race has been held up as an example of what a high-“IQ” population can do and, pointing to the correlation between IQ and standardized testing, “HBDers” claim this as proof that East Asians are more “intelligent” than Europeans and Africans. Lynn (2006: 114) states that the average IQ of China is 103. There are many problems with such a claim, not least the many reports of Chinese cheating on standardized tests. East Asians are claimed to be “genetically superior” to other races as regards IQ, but this claim fails.
Chinese IQ and cheating
Differences in IQ scores have been noted all over China (Lynn and Cheng, 2013), but the general consensus is that China’s IQ as a country is 105, while Singapore’s and Hong Kong’s are 103 and 107 respectively (Lynn, 2006: 118). To explain the pattern of racial IQ scores, Lynn has proposed the Cold Winters theory (against which considerable responses have been mounted), which holds that the harshness of the ice-age environment selected for higher ‘general intelligence’ in East Asian and European populations; such a hypothesis appeals to hereditarians since East Asians (“Mongoloids,” as Lynn and Rushton call them) consistently score higher than Europeans on IQ tests (e.g., Lynn and Dzobion, 1979; Lynn, 1991; Herrnstein and Murray, 1994). In a recent editorial in Psych, Lynn (2019) criticizes this claim from Flynn (2019):
While northern Chinese may have been north of the Himalayas during the last Ice Age, the southern Chinese took a coastal route from Africa to China. They went along the Southern coast of the Middle East, India, and Southeast Asia before they arrived at the Yangzi. They never were subject to extreme cold.
In response, Lynn cites Frost’s (2019) article, which claims that “mean intelligence seems to have risen during recorded history at temperate latitudes in Europe and East Asia.” Just-so storytelling about how and why such “abilities” were “selected for” aside, the Chinese do score higher on standardized tests than whites and blacks, and this deserves an explanation (the Cold Winters theory fails; it is a just-so story).
Before continuing, something must be noted about Lynn and his Chinese IQ data. Lynn ignores numerous studies on Chinese IQ; he would presumably say that he wants to test those living in good conditions and so disregards the parts of China with poor environmental conditions (as he did with African IQs). Here is a collection of forty studies that Lynn did not refer to, some showing that, even in regions of China with optimal living conditions, IQs below 90 are found (Qian et al, 2005). How could Lynn miss so many of these studies if he has been reading up on the matter and, presumably, keeping up with the latest findings in the field? The only answer I can see is that Richard Lynn is dishonest. (I can already hear PumpkinPerson objecting that “Lynn is old! It’s hard to search through and read every study!”)
Although China is currently trying to stop cheating on standardized tests (even a possible seven-year prison sentence does not deter it), cheating on such tests in China, and by Chinese test-takers in America, is rampant. The following is but a sample of what a cursory search on the matter turns up.
One of the most popular ways of cheating on standardized tests is to have another person take the exam for you, which is rampant in China. In one story reported by The Atlantic, students can hire “gunmen” to sit the tests for them, though countermeasures such as voice recognition and fingerprinting are being deployed. It is well known that much of the cheating on such tests is done by international students.
There is cheating even on the PISA, which is used as an “IQ” proxy since the two correlate highly (.89) (Lynn and Mikk, 2009). For the PISA, each country is supposed to select 5,000 of its 15-year-olds at random from around the country; China instead chose its biggest provinces, which are packed with universities. Further, score fluctuations attract attention, which indicates dishonesty. And in one widely reported incident, more than 2,000 people gathered outside a school to protest new measures that banned cheating on tests.
The rift amounted to this: Metal detectors had been installed in schools to root out students carrying hearing or transmitting devices. More invigilators were hired to monitor the college entrance exam and patrol campus for people transmitting answers to students. Female students were patted down. In response, angry parents and students championed their right to cheat. Not cheating, they said, would put them at a disadvantage in a country where student cheating has become standard practice. “We want fairness. There is no fairness if you do not let us cheat,” they chanted. (Chinese students and their parents fight for the right to cheat)
Surely, then, with rampant cheating on standardized tests in China (and among Chinese test-takers in America), we can trust the Chinese IQ numbers.
“Genetic superiority” and immigrant hyper-selectivity
Strangely, some proponents of the concepts of “genetic superiority” and “progressive evolution” still exist. PumpkinPerson is one of them, writing articles with titles like “Genetically superior: Are East Asians more socially intelligent too?”, “More evidence that East Asians are genetically superior”, and “Oriental populations: Genetically superior”, even referring to a fictional character on a TV show as a “genetic superior.” Such fantastical delusions come from Rushton’s ridiculous claim that evolution may be progressive and that some populations are, therefore, “more evolved” than others:
One theoretical possibility is that evolution is progressive and that some populations are more “advanced” than others. Rushton, 1992
Such notions of “evolutionary progress” and “superiority” never passed the smell test for me, even back in my “HBD” days. In any case, how can East Asians be said to be “genetically superior”? What do “superior genes” or a “superior genome” look like? The claim has been stated outright by, for example, Lynn (1977), who proclaims, for the Japanese, that his “findings indicate a genuine superiority of the Japanese in general intelligence.” This claim, though, is refuted by the empirical data: what explains East Asian educational achievement is not “superior genes” but the belief that education is paramount for upward social mobility, and so, to preempt discrimination, East Asians overperform in school (Sue and Okazaki, 1990).
Furthermore, the academic achievement of Asian Americans cannot be reduced to Asian culture; the fact that they are hyper-selected is why social class matters less for Asian Americans (Lee and Zhou, 2017).
These counterfactuals illustrate that there is nothing essential about Chinese or Asian culture that promotes exceptional educational outcomes, but, rather, is the result of a circular process unique to Asian immigrants in the United States. Asian immigrants to the United States are hyper-selected, which results in the transmission and recreation of middle-class specific cultural frames, institutions, and practices, including a strict success frame as well as an ethnic system of supplementary education to support the success frame for the second generation. Moreover, because of the hyper-selectivity of East Asian immigrants and the racialisation of Asians in the United States, stereotypes of Asian-American students are positive, leading to ‘stereotype promise’, which also boosts academic outcomes
Inequalities reproduce at both ends of the educational spectrum. Some students are assumed to be low-achievers and undeserving, tracked into remedial classes, and then ‘prove’ their low achievement. On the other hand, others are assumed to be high-achievers and deserving of meeting their potential (regardless of actual performance); they are tracked into high-level classes, offered help with their coursework, encouraged to set their sights on the most competitive four-year universities, and then rise to the occasion, thus ‘proving’ the initial presumption of their ability. These are the spill-over effects and social psychological consequences of the hyper-selectivity of contemporary Asian immigration to the United States. Combined with the direct effects, these explain why class matters less for Asian-Americans and help to produce exceptional academic outcomes. (Lee and Zhou, 2017)
The success of second-generation Chinese Americans has likewise been held up as evidence that the Chinese are ‘superior’ in their mental abilities, being deemed ‘model minorities’ in America. In Spain, however, the story is different: first- and second-generation Chinese immigrants score lower than the native Spanish population on standardized tests. The ‘type’ of immigrant that emigrates has been put forward as an explanation for these differences in attainment among Asian populations. For example, Yiu (2013: 574) writes:
Yet, on the other side of the Atlantic, a strikingly different story about Chinese immigrants and their offspring – a vastly understudied group – emerges. Findings from this study show that Chinese youth in Spain have substantially lower educational ambitions and attainment than youth from every other nationality. This is corroborated by recently published statistics which show that only 20 percent of Chinese youth are enrolled in post-compulsory secondary education, the prerequisite level of schooling for university education, compared to 40 percent of the entire adolescent population and 30 percent of the immigrant youth population in Catalonia, a major immigrant destination in Spain (Generalitat de Catalunyan, 2010).
… but results from this study show that compositional differences across immigrant groups by class origins and education backgrounds, while substantial, do not fully account for why some groups have higher ambitions than others. Moreover, existing studies have pointed out that even among Chinese American youth from humble, working-class origins, their drive for academic success is still strong, most likely due to their parents’ and even co-ethnic communities’ high expectations for them (e.g., Kao, 1995; Louie, 2004; Kasinitz et al., 2008).
The Chinese in Spain believe that education is a closed opportunity, and so they allocate their energy elsewhere: into entrepreneurship (Yiu, 2013). Instead of pushing for education, Chinese parents there push for entrepreneurship. What this shows is that what the Chinese do depends on context and on how they perceive they will be regarded in the society they emigrate to. US-born Chinese are shuttled toward higher education, whereas second-generation Chinese in the Netherlands have lower educational attainment, and the differences come down to national context (Noam, 2014). The Chinese in the U.S. are hyper-selected whereas the Chinese in Spain are not, and it shows: the Chinese in the US have high educational attainment while in Spain and the Netherlands they have low educational attainment. In fact, the Chinese in Spain show lower educational attainment than other ethnic groups there (Central Americans, Dominicans, Moroccans; Lee and Zhou, 2017: 2236), which would come as a surprise to Americans.
Second-generation Chinese parents match their intergenerational transmission of their ethnocultural emphasis on education to the needs of their national surroundings, which, naturally, affects their third-generation children differently. In the U.S., adaptation implies that parents accept the part of their ethnoculture that stresses educational achievement. (Noam, 2014: 53)
So what explains the higher educational attainment of Asian Americans? A mixture of culture and immigrant (hyper-)selectivity, along with the belief that education is paramount for upward mobility (Sue and Okazaki, 1990; Hsin and Xie, 2014; Lee and Zhou, 2017) and the fact that what Chinese immigrants choose to do depends on national context (Noam, 2014; Lee and Zhou, 2017). Poor Asians do indeed perform better on scholastic achievement tests than poor whites and poor ‘Hispanics’ (Hsin and Xie, 2014; Liu and Xie, 2016). Teachers even favor Asian American students, perceiving them to be brighter than other students. But what are assumed to be cultural values are actually class values, a consequence of the hyper-selectivity of Asian immigration to America (Hsin, 2016).
The fact that the term “Mongoloid idiot” was coined for those with Down syndrome because they looked Asian is very telling (see Hilliard, 2012 for discussion). And the IQ-ists switched from talking about Caucasian superiority to Asian superiority right as the East began its economic boom (Lieberman, 2001). The disparate “estimates” of skulls across these centuries point to the fact that such “scientific observations” are painted with a cultural brush. See, e.g., table 1 from Lieberman (2001):
This tells us, again, that “scientific objectivity” is clouded by the political and economic prejudices of the time. It also allows Rushton to proclaim, “If my work was motivated by racism, why would I want Asians to have bigger brains than whites?” Indeed, what a good question. The answer is that the whole point of “HBD race realism” is to denigrate blacks; so long as whites sit above blacks in their little self-made “hierarchy,” no such problem exists for them (Hilliard, 2012).
Note how Rushton’s long-debunked r/K selection theory (Anderson, 1991; Graves, 2002) took the current social hierarchy and arranged dozens of traits on a hierarchy of M > C > N (Mongoloids, Caucasoids, and Negroids respectively, to use Rushton’s outdated terminology). It is a political statement to put the ‘Mongoloids’ at the top of the racial hierarchy; the goal of ‘HBD’ is to denigrate blacks. Note, though, that in the late 19th and early 20th centuries East Asians were deemed to have small brains and large penises, and Japanese men, for instance, were said to “debauch their [white] female classmates” (quoted in Hilliard, 2012: 91).
The “IQ” of China (along with its scores on other standardized tests such as TIMSS and PISA) should be suspect in light of the testing scandals described above. Richard Lynn failed to report dozens of studies that show low IQ scores in China, thus inflating the country’s scores. This is yet another nail in the coffin for the Cold Winters theory, since the story is formulated on the basis of cherry-picked IQ scores of children. I have noted that with different assumptions we would have different evolutionary stories: if the other data were included and, say, Chinese IQ were found to be lower, a story would simply be created to justify that score instead. This is illustrated wonderfully by Flynn (2019):
I will only say that I am suspicious of these because none of us can go back and really evaluate environment and mating patterns. Given free rein, I can supply an evolutionary scenario for almost any pattern of current IQ scores. If blacks had a mean IQ above other races I could posit something like this: they benefitted from exposure to the most rigorous environmental conditions possible, namely, competition from other people. Thanks to greater population pressures on resources, blacks would have benefitted more from this than any of those who left at least for a long time. Those who left eventually became Europeans and East Asians.
The hereditarians point to the academic success of East Asians in America as proof that IQ tests ‘measure’ intelligence, but East Asians in America are a hyper-selected sample. As the references above show, second-generation Chinese immigrants in Spain and the Netherlands show lower educational attainment than other ethnies (the opposite holds in America), and this is explained by the context the immigrant family finds itself in: where do you allocate your energy, education or entrepreneurship? Such choices seem to be class-based, since education is championed by the Chinese in America but not in Spain or the Netherlands. They refute any claim of ‘genetic superiority’; they also refute, for that matter, the claim that genes drive educational attainment (and therefore IQ), though we did not need them to know that IQ is a bunk ‘measure.’
So if the Chinese cheat on standardized tests, then we should not accept their IQ scores; the fact that they, for example, supply non-random samples of children from their largest provinces speaks to this dishonesty. In this they are like Lynn, avoiding the evidence that IQ scores are not what they seem: both Lynn and the Chinese government are dishonest cherry-pickers. The ‘fact’ that East Asian educational attainment can be attributed to genes is false; it is attributable to hyper-selectivity and to notions of class and of what constitutes ‘success’ in the destination country, so what they attempt is based on (environmental) context.
In a conversation with an IQ-ist, one may eventually find oneself discussing the concepts of “superiority” and “inferiority” as they regard IQ. The IQ-ist may say that only critics of IQ place value-judgments on the number one gets from an IQ test. But anyone who says this is showing their ignorance of the history of the concept. IQ was, in fact, formulated to show who was more “intelligent” (“superior”) and who was less “intelligent” (“inferior”). Here is the thing, though: the terms “superior” and “inferior” are anatomic, which shows the folly of the attempted appropriation of the terms.
Superiority and inferiority
If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, one need only look at Lewis Terman’s very first Stanford-Binet tests. His scales, now in their fifth edition, state that IQs between 120 and 129 are “superior,” 130-144 “gifted or very advanced,” and 145-160 “very gifted” or “highly advanced.” How strange. The IQ-ist can say that the early testers were just products of their time and that no serious researcher believes such foolish things, that one is “superior” to another on the basis of an IQ score. But what about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It is ridiculous to take anatomic terminology (for physical things) and attempt to use it to describe mental “things.”
But perhaps the most famous hereditarian, Arthur Jensen, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies then we are in danger of creating a “genetic underclass” (Jensen, 1969). This, like the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion on the concept of heritability.)
This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the social hierarchy of the time, which would then be used as justification for their place in that hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt at forwarding eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as is evidenced a few decades before the conceptualization of standardized testing.
Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and IQ scores. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who is or is not “intelligent.” Then, some decades later, Binet and Simon finally constructed a test that discriminated between who they felt was or was not intelligent—which discriminated by social class. This test was finally the “measure” that would differentiate between social classes, since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in, on the basis of IQ scores that would show how they would work based on their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaorayoon, and Martin (2017) write that:
The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European descent over non-Whites and recent immigrants (Gersh, 1987). By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So, as one can see, this “superiority” was baked into IQ tests from the very start, and the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:
With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.
While Binet and Simon were agnostic on the nature-nurture debate, the test items that they most liked were those that differentiated between social classes the most (which means they were consciously chosen for those goals). And reading about their “ideal city,” we can see that those who have higher test scores are “superior” to those who do not. They were operating under the assumption that they would be organizing society along class lines, with the tests being measures of group mental ability. For Binet and Simon, it did not matter whether the “intelligence” they sought to define was inherited or acquired; they just assumed that it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making them circular (Au, 2009).
But the whole reason why Binet and Simon developed their test was to rank people from “best” to “worst,” “good” to “bad.” This does not mean, however, that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had pronouncements of such ranking built in, even if it is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).
The goal, then, of psychometry is clear. Garrison (2009: 12) writes:
Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.
But such notions of superiority and inferiority, as I stated back in 2018, are nonsense when taken out of anatomic context:
It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.
An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (physical measures) and attempt to liken it to IQ—this is because nothing physical is being measured, not least because the mental isn’t physical nor reducible to it.
They were presuming to measure one’s “intelligence” and then stating that one has “superior” “intelligence” to another—and that IQ tests were measuring this “superiority.” However, psychometrics is not a form of measurement—rankings are not measures.
Knowledge becomes reducible to a score in regard to standardized testing, so students, and in effect their learning and knowledge, are then reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). So what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).
In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)
…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)
One of the best examples of a valid measure is temperature—and it has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature, of what is hot and what is cold. It is a physical property that quantitatively expresses heat and cold. So thermometers were invented to quantify temperature, whereas IQ tests were invented to quantify “intelligence.” Those like Jensen attempt to make the analogy between temperature and IQ, thermometers and IQ tests. Thermometers, with a high degree of reliability, measure temperature—and so too, Jensen claims, do IQ tests.
So, IQ-ists claim, temperature is what thermometers measure, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and then numerical thermometers could be constructed, with a procedure to assign numbers to degrees of heat between and beyond the fixed points. The thermoscope was what was used for the establishment of fixed points. The thermoscope itself has no fixed points, so we do not have to circularly rely on the concept of fixed points for reference. And if, say, we use a thermoscope to check a proposed fixed point such as blood heat and its reading goes up and down, we can rightly infer that the temperature of blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into water that is scalding hot and then put the thermoscope into the same water, we note that it rises rapidly. So the thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ justifies, in a non-circular way, the claim that temperature is truly being measured. We trust the physical sensation we get from whichever surface we are touching, and from this we can infer that thermoscopes do indeed validate thermometers, making the concept of temperature validated in a non-circular manner and a true measure of hot and cold. (See Chang, 2007 for a full discussion of the measurement of temperature.)
Thermometers could be tested by the criterion of comparability, whereas IQ tests, on the other hand, are “validated” against tests of educational achievement, against other IQ tests which were not themselves validated, and against job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017)—which makes the “validation” circular, since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).
For example, take introductory chemistry. When one takes the intro course, they see how things are measured. Chemists may measure in moles or grams, or note the physical state of a substance. We may measure water displacement, reactions between different chemicals, and so on. And although chemistry does not reduce to physics, these are all actual physical measures.
But the same cannot be said for IQ (Nash, 1990). We can rightly say that one person scores higher than another on an IQ test, but that does not signify that some “thing” is being measured, because—to use the temperature example again—there is no independent validation of the “construct.” IQ is a (latent) construct, whereas temperature is a quantitative measure of hot and cold. Temperature really exists; the same cannot be said about IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature, for example (Midgley, 2018).
Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside a building or outside. One may say that we observe “intelligence” daily, but that is NOT a “measure”—it’s just a descriptive claim. Blood pressure is another physical measure: it refers to the pressure in the large arteries of the circulatory system, which is due to the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievement then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).
It also should be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence. But thermometers are not identical to standardized scales, and the claim fails, as Nash (1990: 131) notes:
In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.
This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING!—something important for life success, since the tests correlate with such outcomes. Yet there is no precise specification of the measured object, no object of measurement, and no measurement unit, which “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).
Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. The points do not reflect any inherent property of individuals; they reflect one’s relation to the society they are in (since all standardized tests are proxies for social class).
One only needs to read into the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet and then over to Terman, Yerkes, and Goddard, the goal has been clear: enact eugenic policies on those deemed “unintelligent” by IQ tests—who just so happen to correspond with the lower classes, in virtue of how the tests were constructed, going back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory, as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence,” which then reproduces the social structure and “justifies” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
The attempts to hijack anatomic terminology, as I have shown, are nonsense, since we do not apply anatomic terminology to non-physical things; the first IQ-ists were explicit about what they were attempting to “show,” and this still holds for all standardized testing today.
Binet, Terman, Yerkes, Goddard, and others all had their own priors, which then led them to construct tests in such a way as to lead to their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this can be justified by reading Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policy).
Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and in no way, shape or form—contra Jensen—like the concept of IQ/”intelligence”, which Jensen conflates (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983 and see Nash, 1990: chapter 8 for a discussion of the measurement objection of Berka).
For these reasons, we should not claim that IQ tests ‘measure’ “intelligence,” nor that they measure one’s “genetic standing” or how “superior” one is to another; we should instead recognize that psychometrics is nothing more than a political ranking of human worth.
I was watching the program Diagnose Me on Discovery Health, in which a woman kept having seizures whenever she heard a certain type of music—“alternative high-pitched female singing,” according to the woman—but her doctors didn’t believe her. So she and her husband began looking for specialists in hard-to-treat epilepsy. The specialist they found recommended an intracranial EEG (images of such a surgery can be found below), which meant that the top part of her skull would be removed and electrodes would be placed onto the surface of her brain. After the electrodes were placed on the brain, they played the music she said triggered her epilepsy—the “high-pitched female singing”—and she began to seize. The doctor was shocked; he couldn’t believe what he saw. They ended up finding that a majority—though not all—of her seizing was coming from the right temporal lobe. So she and her husband had a choice: live with the seizures (which she couldn’t, because she never knew where she would hear the music) or have part of her brain removed. She chose to have part of her right temporal lobe removed, and once it was removed she no longer seized on hearing the music that had formerly triggered her symptoms.
The condition is called “musicogenic epilepsy,” a rare form of what is called “reflex epilepsy”—another similar form involves hitting something, which then causes seizing in the patient. (It’s called “reflex epilepsy” since the epileptic event occurs after a trigger—music, hitting something with your foot, seeing something on the television, etc.) In musicogenic epilepsy, certain types of music, even certain musical notes, can trigger electrical brain activity. The cure is to remove the part of the brain that is affecting the patient. (It is worth noting that many individuals over the past 100 years have had large sections of their brains removed with no loss of functioning, staying pretty much the same as they were.) It is important to note that the music is not causing the seizures; it is triggering them—it brings them out. Most of the seizing is localized in the right temporal lobe (Kaplan, 2003), and further in Heschl’s gyrus (Nagahama et al, 2017). This has been noted by a few researchers since the last century (Shaw and Hill, 1946; Fujinawa and Kawai, 1978), while Joan of Arc was said to have her perception scrambled on hearing church bells, and a Chinese poet stated that he became “absent-minded” and “sick” on hearing the flute-playing of a street vendor (Murray, 2010: 173).
The condition was first noted by a doctor in 1937, with the first known reference to this form of epilepsy dating to the 1600s (Kaplan, 2003: 465). It affects about 1 in 10,000,000 people (Ellis, 2017). Critical reviews state that one should not underestimate the power of anti-epileptic drugs in the treatment and management of musicogenic epilepsy (Maguire, 2012), but in the case described above, such drugs did nothing to stop the woman’s seizures, which occurred each time she heard a certain kind of music. The effect of music on seizing, it seems, is dichotomous, with certain kinds of music either helping manage seizures or causing them. The same melody, however, can be played in a different key and not cause seizing (Kaplan and Stoker, 2010); so it seems that certain sound frequencies disrupt the electrical activity in the brain, which then leads to seizures of this kind. A specialist in epilepsy explains:
In people with reflex epilepsy, the trigger is extremely specific, and the seizure happens soon thereafter. “It can be a specific song by a particular person or even a specific verse of the song,” says Dr. So, who is a past president of the American Epilepsy Society. For some people, the trigger is a touch or motion. “If patients are interrupted in a particular way, if they are walking along and someone steps in front of them, they may have a seizure,” says Dr. So. In Japan, seizures caused by video games have been reported, he says, but they are highly unusual.
Dr. So evaluated a woman from Tennessee who began having seizures during church when she heard highly emotional hymns. She would blank out and drop her hymn book. At other times, Whitney Houston’s “I Will Always Love You” triggered seizures. The woman had a history of small seizures, but having one while hearing music was a new development. She said the seizures would typically begin with a sense of dread and the feeling that someone was lurking by her side. Dr. So and his Mayo Clinic team attached electrodes to the woman’s scalp to study electrical activity while she listened to different types of music. An electroencephalogram (EEG) showed that slow, emotional songs triggered seizure activity in her brain’s temporal lobe, while faster tunes did not. Dr. So diagnosed the woman with musicogenic epilepsy, a type of reflex epilepsy where seizures are caused by specific music or types of music, and prescribed antiseizure medication. He says he’s had another patient whose seizures were triggered by Rihanna’s “Disturbia” and Pharrell Williams’ “Happy.”
Though musicogenic epilepsy is extremely rare, it may be slightly underreported since many people with the disease may not put two and two together and link their seizing with the type of music or sounds they hear in their day-to-day life. One individual with epilepsy also recounts his experience with this type of rare epilepsy:
… but I still find that certain music, high pitched noise set’s off a kind of aura, I feel spaced out, have intense fear and it sounds almost like water rushing and I hear voices.
One case report exists of a man whose later seizures were induced by music which prompted stress and a bad mood, implying that the aetiology of musicogenic epilepsy involves an association between the seizing and the patient’s mental state (Cheng, 2016).
We can see how the intracranial EEG looks and how it is done (WARNING: GRAPHIC) by referring to Nagahama et al (2019):
Intraoperative photographs demonstrating exposure and intracranial electrode placement. A right frontotemporoparietal craniotomy (A) allowed proper exposure for placement of grid, strip, and depth electrodes (B), including the HG depth electrode. The sylvian fissure is marked with a dashed line. The HG depth electrode and PT depth electrode are marked with X symbols anteriorly and posteriorly, respectively, at their entry points at the cortical surface. Ant = anterior; inf = inferior; post = posterior; sup = superior.
Intraoperative placement of the HG depth electrode. A: The planning view on the frameless stereotactic system (Stealth Navigation, Medtronic) showing the entry point and the trajectory (green circles and dotted lines). B: The similar planning view showing the target and the trajectory. C and D: Intraoperative photographs showing placement of the HG depth electrode. A Stealth Navigus probe was used to select the appropriate trajectory of a guiding tube positioned over the entry point (C). An electrode-guiding cannula was advanced through the tube to the previously determined depth (D). An actual depth electrode was subsequently passed through the cannula, followed by removal of the guiding tube/cannula system. Note the unique anterolateral-to-posteromedial trajectory within the STP for placement of the HG depth electrode.
The average age of onset of musicogenic epilepsy is 28 (Wieser et al, 1997), while cases are often not reported until one’s mid-to-late 30s, because most people are unaware that music may be causing their seizures (Pittau et al, 2008; Generalov et al, 2018). This may be because seizing can begin several minutes after hearing the offending music (Avanzini, 2003). While the specific tempo and pitch of the music seem to have no effect on the onset of seizing (Wieser et al, 1997), many patients report that their specific triggers are certain lines in songs (Tayah et al, 2006), which implies that it is not the music itself that causes the seizing but the patient’s emotional response to hearing it—and this is supported by the fact that many patients who report such symptoms are interested in music or are musicians themselves (Wieser et al, 1997).
See table 1 from Kaplan (2003: 466) for causes of musicogenic epilepsy in the literature:
As can be seen in the above table, the mood component is related to the musical type; the music elicits some sort of emotional state in the individual, which, it seems, is part of the cause that triggers the seizure—though the music/emotions are not causing the seizing itself; they bring it out.
Going to the shops was fraught with danger. Turning on the television was like playing russian roulette. Even getting into a lift was a gamble. For 23 years my life was hugely restricted because I had epileptic fits whenever I heard music.
If it was more than a few notes, a strange humming would start in my head, immediately followed by a seizure. I didn’t fall to the ground and twitch, but would wander around in a daze, my heart racing, my mind a blank. I also experienced hallucinations: people around me appeared microscopic and it felt as if I had been captured by an invisible force field. It was a terrifying experience and I felt drained for hours afterwards. (Experience: Music gave me seizures)
One woman describes her experience with musicogenic epilepsy for The Guardian. She did everything she could think of to stop the music-induced seizures—from sticking cotton balls into her ears to block sound, to staying inside the house (in case a passing car played the type of music that triggered her seizing), to having a silent wedding with no music. She ended up being referred to a specialist who had her brain checked out. It turned out she had scarring on her right temporal lobe, and so surgery was done to fix it. She was cured of her condition and could then attend social functions where music was played.
The brain has the capacity to produce electricity, and so in certain individuals with certain structural abnormalities of the brain (as in the right temporal lobe), hearing a certain kind of music or tune may set off seizing. While the condition is rare (around 150 cases have been noted), strides are being made in discovering how and why such things occur. The only cure, it seems, is to remove the affected part of the brain—the right temporal lobe in a majority of cases. Such operations, however, do not always have debilitating effects (i.e., loss of mental capacity). That the brain’s normal functioning can be affected by sound (music) is very interesting and speaks to the fact that our brains are an enigma that is only just beginning to be unraveled.
In 1969, Arthur Jensen published a bombshell article in the Harvard Educational Review titled How Much Can We Boost IQ and Scholastic Achievement?, in which he argued that compensatory education (e.g., Head Start) has failed and, therefore, should be abandoned. Jensen was a vocal opponent of school integration due to his research on IQ (Tucker, 2002). Tucker (1998) also argued “that the supposed significance of the genetic influence on IQ has invariably reflected a particular ideological view of the purpose of education and its relation to the state that is rooted in conservative political thought.” Such ideological leanings of the IQ-ists have been well noted (Tucker, 2002; Saini, 2019). (Though it should be noted that school integration didn’t cause any negative effects for whites and had many positive effects for blacks; see Nazaryan and Johnson, 2019.) Note how the revival of “racial differences in intelligence” in the mainstream occurred after the Civil Rights Act of 1964. Such ideological leanings have been present in ‘intelligence’ testing since its inception.
In any case, what was the ultimate goal of such research into racial/class differences in “intelligence”? The original applications of what eventually became tests of “intelligence” were to (1) identify those with learning disabilities and (2) shoe-horn people into the jobs “for” them—what Binet called his “ideal city.” IQ tests were brought to America and translated from French to English by Henry Goddard in 1911, and then again by Lewis Terman in 1916. Goddard was hesitant about forced sterilization, but he did believe that those his tests designated as “feeble-minded” should not be allowed to bear children.
Proponents of IQ emphatically state that it’s not a “measure of superiority” and that it’s only the critics who believe that, with no evidence for the claim. However, if one reads Jensen’s earliest writings on IQ, they would see that Jensen did, in fact, believe that heritability could estimate one’s “genetic standing” (Jensen, 1970) and that if we continued our welfare policy we would lead a group toward “genetic enslavement” (Jensen, 1969). Jensen ran with racists, so there is a possibility that he himself held views similar to those of the people he ran with. The following quotes show Jensen’s eugenic thinking:
“Is there a danger that current welfare policies, unaided by eugenic foresight, could lead to the genetic enslavement of a substantial segment of our population?” – Jensen, 1969: 95, How Much Can We Boost IQ and Scholastic Achievement?
“What the evidence on heritability tells us is that we can, in fact, estimate a person’s genetic standing on intelligence from his score on an IQ test.” – Jensen, 1970, Can We and Should We Study Race Difference?
“… the best thing the black community could do would be to limit the birth-rate among the least-able members, which of course is a eugenic proposal.” – A Conversation with Arthur Jensen, American Renaissance, 1992
In a review of Raymond Cattell’s Beyondism, Richard Lynn stated:
“What is called for here is not genocide, the killing off of the populations of incompetent cultures. But we do need to think realistically in terms of “phasing out” of such peoples.”
I don’t see how he’s not calling for genocide—genocide is the systematic killing of a specific group of people, and eugenic methods are one way to accomplish it. Richard Lynn’s father was a eugenicist, signing his name to a manifesto which asked how the genetic constitution of the world could be improved, per Lynn (see Interview with a pioneer, American Renaissance). Lynn continues:
My father’s interests did give me an early appreciation of the importance of genetics, although I think I would have adopted this position anyway since the evidence is irrefutable for a strong genetic determination of intelligence and educational attainment and a moderate genetic determination of personality. More importantly, my father served as a role model for scientific achievement and has given me the confidence to advance theories that have sometimes been controversial.
Lynn stated that he is “very pessimistic” about the future of the West, due to the immigration of individuals from low IQ countries who have a higher birthrate than Westerners along with the supposed dysgenic fertility that American white women are facing. (See Lynn, 1996, 2001 for a discussion and look into these views.) In Dysgenics, Lynn (1996: 2) writes that he hopes “To make the case that in the repudiation of eugenics an important truth has been lost, and to rehabilitate the argument that genetic deterioration is occurring in Western populations and in most of the developed world.”
Raymond Cattell also believed that certain people should (voluntarily) be sterilized. He created a religion called “Beyondism” in an attempt to accomplish this goal; his research, in fact, served his eugenic and political beliefs (Tucker, 2009). Compassion was seen as evil by Cattell, which is one major way Beyondism strays from other religions: presumably, one is compassionate to those less fortunate, and such compassion would therefore help those Cattell deemed “genetically inferior”—so compassion is evil, since it leads to the propagation of those Cattell deemed less fit. Cattell also stated that, from the perspective of Beyondism, the propagation of ‘genetic failures’ is “positively evil” (Tucker, 2009: 136). He also coined the term ‘genthanasia,’ the “phasing out” of a “moribund culture … by educational and birth measures, without a single member dying before his time” (Cattell, quoted in Tucker, 2009: 146).
William Shockley “reasoned” that if the problems that blacks face in America are hereditary, then by attempting to halt the reproduction of blacks, there would be less racism against them. Well, if there are fewer people to be racist against, then there would be less racism against those people. Shocking. Further, Shockley wanted to institute what he called a “Voluntary Sterilization Bonus Plan” in which individuals with IQs below 100 would be given $1000 for each point below 100—although the plan was never implemented (Hilliard, 2012: 50). He also wanted to establish a sperm bank of ‘geniuses’ (whatever that means), but women did not want the sperm of the short, balding Shockley (he stood 5’6” and weighed 150 pounds), despite his ‘high IQ’ (though he had been rejected as one of Terman’s Termites); they wanted the sperm of taller, better-looking men, regardless of their IQ (Hilliard, 2012: 20).
It is worth noting that Shockley’s thinking on race and IQ preceded Jensen’s—Jensen was in the audience at one of Shockley’s talks in the late ’60s, hearing him speak about racial differences in IQ. Psychology was Jensen’s second choice; his first was to be a symphony conductor. Hilliard (2012: 51) describes this:
“When Shockley addressed a meeting of the Center for Advanced Study in the Behavioral Sciences at Stanford in the late 1960s, one member of the audience drawn to his discourse was Arthur R. Jensen, a psychologist who taught at the University of California–Berkeley. Jensen, who had described himself as a “frustrated symphony conductor,” may have had his own reasons for reverencing Shockley’s every word. The younger psychologist had been forced to abandon a career in music because his own considerable talents in that area nevertheless lacked “soul,” or the emotional intensity needed to succeed in so competitive a profession. He decided on psychology as a second choice, carrying along with him a grudge against those American subcultures perceived as being “more expressive” than the white culture from which he sprang. Jensen received his bachelor’s degree in that field from the University of California–Berkeley in 1945.”
Shockley even disowned his son for dating a Costa Rican woman since it would “deteriorate their white gene pool”, while describing his children as a “considerable regression”, even though they had advanced degrees. He blamed this ‘genetic misfortune’ on his wife, who did not have as high an educational attainment as he did (Hilliard, 2012: 49). This man greatly influenced Jensen—and it seems to show in his first writings on IQ—which eventually kicked off the (frivolous) ‘IQ debate’ back in the late 1960s. (James Thompson has said that Shockley wouldn’t talk to anyone if he didn’t know their IQ—presumably because he did not want to talk to anyone ‘lower’ than he was. Such is the idiotic ‘thinking’ of eugenic IQ-ists.)
Shockley was involved in a car accident and received a head injury, with colleagues noting that his views on race and eugenics came about after the accident (Hilliard, 2012: 48). So it can rightly be argued that if Shockley had never gotten into a car accident, he would never have held the views he did on race and IQ, meaning he would not have been giving the talk that Jensen attended and that eventually led Jensen to write his infamous 1969 paper. So the current revival of the race-and-IQ debate can be said to be due to Shockley’s influence on Jensen, which is due (in some way) to head injuries sustained in a car accident.
IQ-ists speak of a “genetic deterioration”, what is termed “dysgenics” (the opposite of eugenics). Professor Seymour Itzkoff published The Decline of Intelligence in America (Itzkoff, 1994), arguing that the decline in our country’s “intelligence” is the cause of our economic and political woes. And, while he does not outright discuss eugenics in the book, he states that higher IQ people are not having children and so the national IQ is decreasing—a dysgenic effect. He is also a recipient of funding from the Pioneer Fund, published in Mankind Quarterly, and was one of the 52 signatories of Mainstream Science on Intelligence (Gottfredson, 1997).
The Decline of Intelligence in America was The Bell Curve before The Bell Curve. Itzkoff argued for policy prescriptions, including encouraging certain people to breed and discouraging others from breeding (eugenics without calling it eugenics). Itzkoff stated that welfare policy is one reason why our “intelligence”—as a nation—has declined (see also Jensen, 1969; Lynn, 2001). Itzkoff (1994: 195) states that “Those at the bottom should be humanely persuaded, with generous gifts if deemed appropriate but for one generation only to refrain from conceiving and having children.” So his views are a mixture of Jensen’s, Shockley’s, and others’. Itzkoff advocates both positive and negative eugenics for black Americans. I have not seen any IQ-ist discuss Itzkoff’s writings; I will do so in the future.
Philosopher and IQ-ist Jonathan Anomaly (see Winegard, Winegard, and Anomaly, 2020) has a paper in which he ‘defends eugenics’, even stating what we ‘should’ (cautiously) do about public policy in relation to eugenic ideas. He speaks of an “undesirable genetic endowment”, while couching his “moral obligation to produce children with the best chance of the best life” (Anomaly, 2018) “through mechanisms of prenatal screening, enshrined in the principle of procreative beneficence and our responsibility to not pass along an ‘undesirable genetic endowment’” (Love, 2018: 4). (See my arguments discouraging such research here and here.) Presumably, as with Itzkoff (1994), such policies would be concentrated on the lower classes, of which minority populations make up the majority. Robert Wilson (2019), author of The Eugenic Mind Project, writes that Anomaly (2018) fails to argue for eugenics, mischaracterizes eugenics and the scientific consensus, simplifies and misleads on the history, is careless about race and IQ, merely appeals to moral principles, and offers no substance linking demography, eugenics, and policy recommendations. Anomaly could hardly contain his negative eugenicist views, his views “being akin to more traditional, negative forms of eugenics” (Wilson, 2019: 74).
In any case, IQ tests were used as a vehicle for sterilization and for barring immigrants from America in the 1920s (Swanson, 1995; Gould, 1996; Wilson, 2017; Dolmage, 2018). In his book The Eugenic Mind Project, Wilson (2017) discusses standpoint eugenics—how eugenic policies affected people, told through their own personal experiences with those policies. In the book, Wilson argues that, to the eugenicists, there are different ‘sorts’ of people that can be distinguished from one another. Wilson (2017: 48) writes:
This was not, however, the way of human betterment favored by the applied science of eugenics and that continues to forms [sic] a key part of The Eugenic Mind. Instead, historically eugenicists typically followed Galton in emphasizing that quality was not equally distributed in the kinds of human populations that are regulated by governmental policies and jurisdictional legislation. More specifically, they thought of such populations as being composed of fundamentally distinct kinds of people, with some kinds being of higher quality than others. Some of these sorts of people were to be improved through eugenic policies that encouraged their own reproduction; others were to be eliminated over generational time. The goal of intergenerational human improvement within the eugenics movement was thus achieved by increasing the proportion of higher-quality people in future generations, and this could be achieved in two ways under eugenic logic. Thus, eugenicists historically advocated ideas, laws, policies, and practices either that aimed to maximize the reproduction of higher-quality people—positive eugenics—or that aimed to minimalize the reproduction of lower-quality people. Or both.
A hallmark of The Eugenic Mind, says Wilson, is the distinction between the ‘fit’ and the ‘unfit’. Thus, those deemed ‘unfit’ would be sterilized, as they are different ‘kinds’ of people. Eugenics was seen as an applied science, and so it attempted to achieve certain goals—the propagation of the ‘fit’ and the elimination of the ‘unfit.’ The first IQ tests were constructed around class, so that the test scores mirrored current racial/class divisions, justifying the social hierarchy (Mensh and Mensh, 1991). Thus, using the ‘science of IQ’, one could then identify the ‘feeble-minded’ and select them out of the gene pool. What would really be going on here is not selecting out ‘low IQ people’ but selecting out those of the lower classes—one of the main reasons these types of views sprang up. A trait was ‘eugenic’ if it fit “folk knowledge characteristic of people” (Wilson, 2017: 70).
Eugenic-type thinking also had its beginnings in criminality, right when the first IQ tests were being constructed by Binet in 1905 (Kuhar and Fatović-Ferenčić, 2012). Lombroso’s thesis of hereditary criminality also gave American eugenicists the platform for the sterilization of criminals (Applegate, 2018: 438). But early eugenicists were more concerned with lower-status, promiscuous white women deemed ‘morons’, who were coerced and segregated in order to prevent them from breeding—it even being suggested by “one or two scientists” that such women should live on farms performing menial tasks and should be sterilized; the eugenicists wanted state control of heredity (Applegate, 2018: 439). Mexican-American men and women were also sterilized in the 1900s (Lira, 2015). Such beliefs seem to be baked-in from political and social prejudices, not any basis in ‘science.’
Eugenicists wished for state control over the “propagation of the mentally incompetent,” whether through mental illness or disability. Ultimately, these beliefs would lead not only to forced detention and isolation, but also to regular affronts to human life and dignity. (Applegate, 2018: 442)
But Dr. Sullivan, the medical officer of Holloway Prison, stated in The Eugenics Review that “Criminals, looked at from the eugenic standpoint, cannot be put into any single category; some of them, probably most of them, are of average stock, and become criminal under the influence of their milieu; they do not directly interest the eugenist” (Sullivan, 1909: 119-120). The “hyperincarceration of blacks” has also been argued to be eugenic in nature (Oleson, 2016; Jones and Seabrook, 2017). Such race-based segregation, argues Oleson, significantly depresses the birthrate of affected groups—racialized minorities (social groups taken to be races, e.g., ‘Hispanics‘ and blacks). Since minority populations are overrepresented in prison and prisoners are less likely to procreate, arguments like Oleson’s (2016) have some weight: incarceration would have eugenic effects over generational time. Even today, America is still sterilizing prisoners, so, it seems, the legacy of the 20th century has yet to let up.
… the penal code is a eugenic instrument, although until today, it has been without consciousness of this function. And following the results of eugenic science, it can tomorrow widen or narrow the circle of crimes in the end of conducing to the physical and psychic improvement of the race. (Battaglini, 1914)
Hitler, noticing the American sterilization laws and the Immigration Act of 1924, instituted eugenic policies on this basis—yes, the Nazi eugenic movement was largely taken from the social policy then existing in America. Pioneer Fund president Harry Laughlin used data from IQ tests in testimony before Congress to bar certain immigrants from the United States (Swanson, 1995; Dolmage, 2018). The Nazis and Americans had extensive contact with each other, and Germany modeled its sterilization law after Laughlin’s model sterilization laws for US states (Cornwell, 2001; Black, 2003; see Allen, 2004; Lelliot, 2004; and Weikart, 2006 for reviews; Wittmann, 2004). It is worth noting, though, that Hitler was not a Darwinian (Richards, 2012, 2013). Hitler’s eugenics laws of the early 1930s “may have had some resemblance to the most extreme of American state’s laws” (Wittmann, 2004: 19), since he was observing the eugenic programs implemented by certain states (eugenic laws were never federally mandated).
The IQ-ist thinking that IQ tests ‘measure’ intelligence led to eugenic policies and the sterilization of criminals and those with low IQs. Jensen and Shockley were at the forefront of bringing IQ-ism back into the picture, and both had eugenic views (Shockley being far more radical than Jensen, but it is clear that Shockley was his influence here and that, without Shockley, IQ-ism may not have had the sway it does today). The IQ-ist ideology linking race and ‘intelligence’, which has led to eugenic thinking and social policies, has been there since the field’s inception, and it is clear that it still exists today (Chitty, 2007). Most of the big-name IQ-ists have, either explicitly or implicitly, stated things that can be construed in a eugenic way, and thus the main goal of the IQ-ist program is revealed: limit the birthrates of the lower classes, which are mostly minorities.
The eugenics movement in America—which then influenced Nazi policy—was not built on science, even though it aimed to be “applied science” (Wilson, 2017); it was a political movement erected to control social groups thought to be inferior to the higher-ups (Quigley, 1995). Eugenic thinking and IQ-ism have gone hand-in-hand throughout their history, and some IQ-ists, even today, still advocate for such social policy based on the results of IQ tests (Herrnstein and Murray, 1994; Itzkoff, 1994).
Such prescriptions from IQ-ists about what ‘should be’ done with low IQ people speak to their bias in the matter. One of the great, so-revered IQ-ists, Arthur Jensen, made his views quite clear in the late 1960s and early ’70s about what should be done about the black population in America. His predecessors, too, had the same type of eugenic beliefs, which then influenced their thoughts and values on crime and ‘intelligence.’ This game that IQ-ists have been playing has been going on for over 100 years; and with the advent of new genetic technology, the IQ-ists can continue their eugenic games, attempting to prevent ‘certain people’ from having children.
These tests, originally devised to mirror groups’ places in the social hierarchy, cannot be ‘used for good’, as the point of their inception was to justify existing class hierarchies as ‘genetic and immutable.’ These psychologists and criminologists leave their fields of inquiry and then attempt to influence public policy using clearly biased tests, as the history of the field has shown since its inception in the early 1900s. These are yet more reasons why IQ testing should be banned, as no good can come from believing that a group or individual is ‘less intelligent’ than another. The eugenic thinking of IQ-ists and that of criminologists feed off each other, with the IQ-ist ideas being the catalyst for the eugenic policies that followed.
In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)
In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the founder of sociobiology E.O. Wilson’s religious beliefs and the epiphany he described when he learned of evolution. A Christian author then used Sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked-in—IQ.
Philosopher of education John White has looked into the connection between the origins of IQ testing and the Puritan religion. The main link between Puritanism and IQ was predestination. The first IQ-ists conceptualized IQ—‘g’ or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, whether one went to Hell was predestined before one even existed as a human being, whereas to the IQ-ists, IQ was predestined, due to genes.
John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:
Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton’s phrase ‘the extreme classes, the best and the worst.’ On the one hand there were the ‘gifted’, ‘the eminent’, ‘those who have honourably succeeded in life’, presumably … the most valuable portion of our human stock. On the other, the ‘feeble-minded’, the ‘cretins’, the ‘refuse’, those seeking to avoid ‘the monotony of daily labor’, democracy’s ballast, ‘not always useless but always a potential liability’.
A Puritan-type parallel can be drawn here—the ‘cretins’ and ‘feeble-minded’ are ‘the damned’ whereas the ‘gifted’ and ‘the eminent’ are ‘the saved.’ This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a surfeit of genes that influence intellectual attainment. Contrast this with the Puritan view that certain people are chosen before they exist to be either damned or saved. Certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism about IQ is, in a way, just like Puritan predestination—according to Galton, Burt, and the other IQ-ists of the 1910s-1920s (ever since Goddard brought the Binet-Simon Scales back from France in 1910).
Some Puritans banned the poor from their communities, seeing them as “disruptors to Puritan communities.” Stone (2018: 3-4) in An Invitation to Satan: Puritan Culture and the Salem Witch Trials writes:
The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.
Could the Puritan treatment of the poor be due to their belief in predestination? Puritan John Winthrop stated in his book A Model of Christian Charity that “some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection.” This, too, is still around today: IQ is said to set “upper limits” on one’s “ability ceiling” to achieve X. The poor are those who do not have the ‘right genes’. This is also one reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The idea that one’s ability is predetermined in the genes—that each person has their own ‘ceiling of ability’, constrained by their genes—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.
To White (2006), the claim that we have an ‘innate capacity’ for ‘general intelligence’ is wanting. He takes this further, though. In discussing Galton’s and Burt’s claim that there are ‘ability ceilings’—and a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs and that their scores increase by 15 points. This, then, would have a large effect on the correlation: “So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied ‘I doubt whether, had we returned a second time, the coaching would have affected our correlations’” (White, 2006: 16). Burt seems to be implying that a “ceiling of ability” exists—an idea he got from his mentor, Galton. White continues:
It would appear that neither Galton nor Burt has any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.
I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)
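White’s coaching point can be illustrated numerically. The sketch below uses made-up twin data (not Burt’s): a uniform 15-point gain for everyone leaves a correlation untouched, while coaching that benefits individuals unevenly lowers it. So a claim like Burt’s r = 0.87 only stands as evidence for fixed ability if one assumes coaching effects are absent or uniform—precisely the assumption White identifies.

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Hypothetical twin pairs whose scores correlate highly, roughly like Burt's 0.87
base = [random.gauss(100, 15) for _ in range(1000)]
twin_a = [b + random.gauss(0, 6) for b in base]
twin_b = [b + random.gauss(0, 6) for b in base]

r0 = pearson(twin_a, twin_b)

# Uniform coaching: +15 to every score shifts the means but not the correlation
r_uniform = pearson([a + 15 for a in twin_a], [b + 15 for b in twin_b])

# Coaching that helps different people by different amounts (0-30 points)
# adds variance to one twin's scores and degrades the correlation
r_varied = pearson([a + random.uniform(0, 30) for a in twin_a], twin_b)

print(f"baseline r = {r0:.3f}")
print(f"uniform +15 r = {r_uniform:.3f}")
print(f"uneven coaching r = {r_varied:.3f}")
```

All numbers here are illustrative assumptions, not data from Burt or White; the point is only the shape of the argument.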
Burt believed that we should use IQ tests to shoe-horn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would then become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed to vocations that were based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:
Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.
So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school, which ‘showed us’ which vocations we would be ‘good at’ based on a questionnaire). Having people in Binet’s ‘ideal city’ work based on their ‘known aptitudes’ would increase, not decrease, inequality—so Binet’s envisioned city is exactly like today’s world. Mensh and Mensh (1991: 24) continue:
When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.
White (2006: 42) writes:
Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.
Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. The Puritan concern for salvation parallels the IQ-ist belief that one’s ‘innate intelligence’ dictated, via genes, whether one would succeed or fail in life. Both held views of those lower on the social ladder, whose work ethic and morals were associated with the reprobate on the one hand and with ‘low IQ people’ on the other. Both groups believed that the family is the ‘mechanism’ by which individuals are ‘saved’ or ‘damned’—salvation being transmitted through one’s family for the Puritans, and, for the IQ-ists, those with ‘high intelligence’ having children with the same. And both believed that their favored group should be at the top, with the best jobs and best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one was endowed with ‘innate general intelligence’ due to genes—a concept the current-day IQ-ists have carried forward.
White drew his parallel between IQ and Puritanism without being aware that one of the first anti-IQ-ists—an American journalist named Walter Lippmann—had drawn the same parallel in the mid-1920s. (See Mensh and Mensh, 1991 for a discussion of Lippmann’s grievances with the IQ-ists.) The parallel holds between Puritanism and Galton’s concept of ‘intelligence’ just as it does for that of the IQ-ists today. White (2005: 440) notes “that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence.” Still, the evidence that White has marshaled in favor of the claim is interesting; as noted, many parallels exist. It would be some huge coincidence for all of these parallels to exist without being causal (from Puritan beliefs to hereditarian IQ dogma).
This is similar to what Oyama (1985: 53) notes:
Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.
But this parallel between Puritanism and hereditarianism doesn’t just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of ‘information’ before activated by the physiological system for its uses still pervades our thought today, even though many others have been at the forefront to change that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).
The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have been shown to have their origins in eugenic beliefs, along with Puritan-like beliefs about being saved or damned based on something predetermined and out of one’s control—just like one’s genes. The conception of ‘ability ceilings’—measured by IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in ‘ability ceilings’ and claim that genes contain a kind of “blueprint” (a view still held today) which predestines one toward certain dispositions, behaviors, and actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is ‘known’ about one’s group. When Binet wrote that, the gene had yet to be conceptualized, but the idea has stayed with us ever since.
So not only did the concept of “IQ” emerge from the ‘need’ to ‘identify’ individuals by the ‘aptitudes’ they would supposedly be well-suited for in, for instance, Binet’s ideal city; it also arose from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the ‘predictions’ it ‘makes’ (see Nash, 1990).
(Disclaimer: None of this is medical advice.)
Unless you’ve been living under a rock since December 2019, you will have heard the panic that SARS-CoV-2 (the virus that causes COVID-19—coronavirus disease) has been causing ever since it emerged in Wuhan, China (Singhal, 2020). This virus spreads very easily—though asymptomatic transmission is thought to be rare, according to the CDC. There is one case report, though, of an infant who showed no signs of COVID-19 but had a high viral load (Kam et al, 2020). In any case, Trump flip-flopped from calling it a ‘hoax’ to taking it seriously, acknowledging the pandemic. “I’ve felt it was a pandemic long before it was called a pandemic“, Trump said. Ah, of course, it must have been just a facade to call it a hoax. (Pandering to his base?) The ever-prescient Trump knows all.
Speaking of prediction, Cheng et al (2007) stated “The presence of a large reservoir of SARS-CoV-like viruses in horseshoe bats, together with the culture of eating exotic mammals in southern China, is a time bomb. The possibility of the reemergence of SARS and other novel viruses from animals or laboratories and therefore the need for preparedness should not be ignored.” Quite the prediction from 13 years ago—implicating southern China’s “culture of eating exotic mammals”, which is more than likely the origin of this current outbreak.
There has been some discussion about whether the coronavirus is “as bad” as they’re saying—discussion which has been criticized, for example, for not bringing up the context-dependency of the numbers. The number of cases in the US, though, as of Friday, March 20, 2020, was 15,219 with 201 deaths. The number of cases keeps increasing daily. As of 3/22/2020, America has had 26,909 cases with 349 deaths, while 178 have recovered. Ninety-seven percent of active cases are currently mild, while three percent are serious.
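To put those two counts in context, a quick back-of-the-envelope calculation shows the implied growth rate. This treats the quoted counts as exact and growth as smoothly exponential, which it is not—it is only a rough sketch:

```python
import math

# US case counts quoted above
cases_mar20 = 15_219   # March 20, 2020
cases_mar22 = 26_909   # March 22, 2020
days = 2

# Implied average daily growth rate over the two days
daily_growth = (cases_mar22 / cases_mar20) ** (1 / days) - 1

# How long cases take to double at that constant rate
doubling_days = math.log(2) / math.log(1 + daily_growth)

print(f"daily growth: {daily_growth:.0%}")         # roughly 33% per day
print(f"doubling time: {doubling_days:.1f} days")  # roughly 2.4 days
```

At that pace cases double in under three days, which is why even counts that look small one week become alarming the next.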
The current recommendations—social distancing, self-quarantining—are what we are doing to fight the virus, but I think we are going to need more drastic measures. Social distancing and self-quarantining will help to slow the spread of the virus, but the virus is still obviously spreading.
All of the talk about what to call it—Wuhan virus, Chinese virus, China virus, coronavirus—is irrelevant. Call it whatever you’d like, just make sure that whomever you’re communicating with knows what you’re talking about. (And, if you want to ensure they do, just call it “coronavirus” as that seems to be the name that has stuck these past few months.) I understand the want to identify where it began and spread from, but of course, others will use it for racial reasons.
The past few days, a lot of attention has been focused on hydroxychloroquine (HCQ) and azithromycin. HCQ is an anti-malarial drug and azithromycin an antibiotic; a trial was done to see if they would have any effect on COVID-19 (Liu et al, 2020).
For HCQ, there is an “expert consensus” on HCQ treatment and COVID-19, and they state:
It recommended chloroquine phosphate tablet, 500mg twice per day for 10 days for patients diagnosed as mild, moderate and severe cases of novel coronavirus pneumonia and without contraindications to chloroquine.
HCQ and chloroquine are cellular autophagy modulators that interfere with the pH-dependent steps of endosome-mediated viral entry and late stages of replication of enveloped viruses such as retroviruses, flaviviruses, and coronaviruses (Savarino and others 2003; Vincent and others 2005).
I don’t know what to make of such results; I am awaiting larger trials on the matter. There is some hope that anti-malarial drugs can help curb the disease.
The Chinese knew that this virus was similar to other SARS strains; their scientists were ordered to stop testing samples and to destroy the evidence. (See here for a timeline of the case.) The scary thing is that this virus has symptoms similar to the common cold we get every winter, so some may brush it off as ‘just a cold.’ I came down with a cold at the end of January and was out of commission for the week. Thankfully, it was not COVID-19.
Italy and China had a strong trade relationship, which seems to have cost Italy. Italy also has one of the oldest populations in the world. Ninety-nine percent of those who died of corona in Italy, though, had other health problems, such as obesity, hypertension, or previous heart problems. Italy began locking down cities as early as two weeks ago, yet it has reported a staggering 4,825 deaths. This, though, is to be expected when a quarter of the country is aged 65 and older, many with multiple comorbidities. So if it is that bad in Italy, with a smaller population, what does that mean for the US in the coming weeks?
New York and New Jersey banned gatherings of more than 50 people, dining out, gyms, etc in an effort to curb the transmission of the virus. Then, at midnight on Friday (3/21/2020), only essential businesses were allowed to stay open—essentials include healthcare, infrastructure, food (no dining in; take-out or delivery only), grocery stores, mail, laundromats, law enforcement, etc. In NJ, all businesses were ordered to close except things like grocery stores, banks, pet stores, and convenience stores. This affected me directly (gyms closed), so I cannot work. I anticipated this a few weeks ago and found a job in logistics, but I was laid off on Friday because the shut-downs of nonessential businesses decreased my work. Now I’m thinking about hunkering down until at least June. Given what we know about the social determinants of health (Marmot, 2005; Cockerham, 2007; Barr, 2019), we can expect what is associated with low class (poor health, stress, etc) to increase as well.
This is only going to get worse in the coming weeks. I do see fewer people out on the street, and I am glad that states are taking measures to curb the transmission of the virus, but I still see people not really taking it seriously. Ads on the radio inform us about death and transmission rates around the country and strongly urge people to stay home and avoid public transportation. Obviously, an enclosed place that many people walk in and out of in quick succession is a great place for the virus to spread. …what if we’re doing what the virus ‘wants’? Don’t worry, the evo-psychos are here to tell us just-so stories.
By this account, COVID-19 is turning out to be a remarkably intelligent evolutionary adversary. By exploiting vulnerabilities in human psychology selectively bred by its pathogen ancestors, it has already shut down many of our schools, crashed our stock market, increased social conflict and xenophobia, reshuffled our migration patterns, and is working to contain us in homogenous spaces where it can keep spreading. We should pause to remark that COVID-19 is extraordinarily successful epidemiologically, precisely because it is not extremely lethal. With its mortality rate of 90%, for example, Ebola is a rather stupid virus: It kills its host — and itself — too quickly to spread far enough to reshape other species’ life-ways to cater to its needs. (The Coronavirus Is Much Worse Than You Think)
Ah, the non-lethality of COVID-19 is to its benefit—it can spread more, it is an “intelligent evolutionary adversary” but it is causing a “moral panic” as well. The damage to our psyche, apparently, is worse than what it could do to our lungs. And while I do agree that this could damage our collective psyches, we don’t need to tell just-so stories about it.
When we come out of this pandemic, I can see us being very cautious as we go back to normal life (where I live, people are still going out, but not as much). Then, hundreds of years later, Evolutionary Psychologists notice how averse people are to going outside. “Why are people so introverted? Why do people avoid others?” they ask. “Why are those who wear masks more attractive than those who don’t wear masks?” They then discover the pandemic of the 2020s which ravaged the world. “Ah! Critics won’t be able to say ‘just-so stories’ now! We know the preceding event—we have a record of it happening!” And so, the evo-psychos celebrate.
In all seriousness, if people do take this seriously, there may be some changes to social/cultural customs, including how we greet people.
Cao et al (2020) conclude: “The East Asian populations have much higher AFs [allele frequencies] in the eQTL variants associated with higher ACE2 expression in tissues (Fig. 1c), which may suggest different susceptibility or response to 2019-nCoV/SARS-CoV-2 from different populations under the similar conditions.” Asian men smoke more cigarettes than Asian women (Ma et al, 2002, 2004; Chae, Gavin, and Takeuchi, 2006; Tsai et al, 2008). Your lungs are lined with hair-like structures called cilia, which sweep debris and microbes out of the airway and protect the bronchus by trapping microorganisms. COVID-19 attacks these same cilia, which also degrade when one smokes. Therefore, the fact that East Asian populations have higher allele frequencies in the variants associated with higher ACE2 expression, along with Asian men’s higher rates of smoking, may be why Asian men seem to be affected more than Asian women. In any case, smokers of any race need to exercise caution.
What if after the pandemic is over life does not go back to normal? What if life during the pandemic becomes the ‘new normal’ when the pandemic is over because everyone is paranoid about contracting the virus again? For introverts, like myself, it’s easy to lock in. I have hundreds of books to choose from to read, so if I do choose to lock in for 2 months (which I am thinking about), then I won’t really be bored. But my thing is this: what’s the point of locking in when everyone else isn’t? The virus still spreads, and when you finally go out the pandemic is still going on. The point of quarantining is understandable—but if everyone doesn’t do it, will it really work? Libertarians be damned, we need the government to step in and do these kinds of things right now. It’s not about the individual, but the public as a whole.
On the other hand, it is thought-provoking to consider whether the government is ramping up the drama in the news to see how far it can go with social control. What a perfect way to see how far the public would go if they got “suggestions” from the government. Just as the government is “suggesting” we be inside by 8 pm to mitigate viral transmission, for example, perhaps it is testing what we would accept and how far it can go before making things mandatory. It is also interesting that toilet paper, hand sanitizer, hand soap, etc are sold out everywhere.
People in my generation have 9/11 to look back to as the “that’s when the world changed” moment. Well, kids alive today (around 7–15 years old) are experiencing their “9/11,” as that’s when the world changed for them. But this coronavirus pandemic is not on a country level—it is on the world level. The whole WORLD is affected. So since our Gregorian calendar is based on the birth of Jesus, I propose the following: change 1–December 2019 AD/CE to BC (before Corona) and anything after December 2019 to AC (after Corona).
I hope that, looking back on the current goings-on, we will not be talking about high death tolls, and that we can get this under control. The only course of action (for now) is to stop the transmission of the virus from human to human. COVID-19 can largely be said to be a social disease, since that is how it is most likely transmitted, which is why social distancing is so important. Being social is how the virus spreads, so to stop spreading the virus we need to be anti-social.
If we do not heed these warnings, then we will permanently be living in the Time of Corona. Coronavirus will dictate what we do and when we do it. No one will want to get sick, but no one will want to take the steps needed to eradicate the threat either. This thing is just getting started; from the end of the month into the first few weeks of April it is only going to get worse. I hope you all are prepared (have food [meat], water, soap, etc) because we’re in for a hell of a ride. With many businesses closing down in an effort to curb the transmission of COVID-19, many people—many of them low-income—will be out of jobs.
Charles Murray published his Human Diversity: The Biology of Gender, Race, and Class on 1/28/2020. I have an ongoing thread on Twitter discussing it.
Murray talks of an “orthodoxy” that denies the biology of gender, race, and class. This orthodoxy, Murray says, consists of social constructivists, and Murray is here to set the record straight. I will discuss some of Murray’s other arguments in the book later, but for now I will focus on the section on race.
Murray, it seems, has no philosophical grounding for his belief that the clusters identified in these genomic runs are races—this is clear from his assumption that the groups appearing in these analyses are races. The assumption is unfounded, and asserting that the clusters are races without any sound justification actually undermines his claim that races exist. That is one thing that really jumped out at me as I was reading this section of the book. Murray discusses what geneticists say, but he does not discuss what any philosophers of race say. And that is to his downfall.
Murray discusses the program STRUCTURE, in which geneticists specify the number of clusters (K) they want before the DNA is analyzed (see also Hardimon, 2017: chapter 4). Rosenberg et al (2002) sampled 1056 individuals from 52 different populations using 377 microsatellites. They defined the populations by culture, geography, and language, not skin color or race. When K was set to 5, the clusters represented folk concepts of race, corresponding to the Americas, Europe, East Asia, Oceania, and Africa. (See Minimalist Races Exist and are Biologically Real.) Yes, the number of clusters that come out of STRUCTURE is predetermined by the researchers, but the clusters “are genetically structured … which is to say, meaningfully demarcated solely on the basis of genetic markers” (Hardimon, 2017: 88).
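STRUCTURE itself fits a Bayesian admixture model, not k-means, but the methodological point here (the researcher chooses K in advance, and the program then returns exactly that many clusters) can be illustrated with any clustering routine. A toy sketch on purely synthetic two-population data; the populations, coordinates, and counts below are invented for illustration and are not from Rosenberg et al:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans(points, k, iters=30, seed=0):
    """Plain k-means: K is an input chosen by the user, never inferred."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist(p, centers[c]))].append(p)
        centers = [mean(cl) if cl else centers[i] for i, cl in enumerate(clusters)]
    return clusters

# Two synthetic "populations" drawn around different means
# (stand-ins for allele-frequency profiles; not real genotype data).
rng = random.Random(1)
pop_a = [(rng.gauss(0.2, 0.05), rng.gauss(0.8, 0.05)) for _ in range(50)]
pop_b = [(rng.gauss(0.7, 0.05), rng.gauss(0.3, 0.05)) for _ in range(50)]
data = pop_a + pop_b

# The algorithm returns exactly as many clusters as it was told to find:
# K = 2 typically recovers the two simulated populations, while K = 3
# forces a split of one of them, even though only two groups were simulated.
for k in (2, 3):
    print(k, sorted(len(cl) for cl in kmeans(data, k)))
```

This is the caveat Hardimon is addressing: a predetermined K does not make the resulting clusters genetically meaningless, but the count of clusters cannot, by itself, tell you how many races there are.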
Races as clusters
Murray then discusses Li et al, who set K to 7; North Africa and the Middle East emerged as new clusters. Murray then provides a graph from Li et al:
So, Murray’s argument seems to be: “(1) If clusters corresponding to concepts of race appear in STRUCTURE and cluster analyses when K is set to 5–7, then (2) race exists. (1). Therefore (2).” Murray is missing a few things here, namely the conditions (see below) that would place the clusters into racial categories. His assumption that the clusters are races—although (partly) true—is not bound by any sound reasoning, as can be seen from his partitioning of Middle Easterners and North Africans as separate races. Rosenberg et al (2002) showed the Kalash as a distinct cluster at K = 6; are they a race too?
No, they are not. Just because STRUCTURE identifies a population as genetically distinct does not entail that the population in question is a race, because it does not fit the criteria for racehood. The fact that the clusters correspond to major geographic areas means that the clusters represent continental-level minimalist races; races, therefore, exist (Hardimon, 2017: 85-86). But to count as a continental-level minimalist race, a group must fit the following conditions (Hardimon, 2017: 31):
(C1) … a group is distinguished from other groups of human beings by patterns of visible physical features
(C2) [the] members are linked by a common ancestry peculiar to members of that group, and
(C3) [they] originate from a distinctive geographic location
…what it is for a group to be a race is not defined in terms of what it is for an individual to be a member of a race. What it means to be an individual member of a minimalist race is defined in terms of what it is for a group to be a race.
Murray (paraphrased): “Cluster analyses/STRUCTURE spit out these continental microsatellite divisions which correspond to commonsense notions of race.” What is Murray’s logic for assuming that clusters are races? It seems that there is no logic behind it—just “commonsense.” (See also Fish, below.) Having found no argument for accepting X number of clusters as the races Murray wants, I can only assume that Murray chose the number that agreed with his preconceptions and used it for his book. (If I am in error and there is an argument in the book, then maybe someone can quote it.) What kind of justification is that?
Compare Hardimon’s argument and definition. A race is:
… a subdivision of Homo sapiens—a group of populations that exhibits a distinctive pattern of genetically transmitted phenotypic characters that corresponds to the group’s geographic ancestry and belongs to a biological line of descent initiated by a geographically separated and reproductively isolated founding population. (Hardimon, 2017: 99)
Step 1. Recognize that there are differences in patterns of visible physical features of human beings that correspond to their differences in geographic ancestry.
Step 2. Observe that these patterns are exhibited by groups (that is, real existing groups).
Step 3. Note that the groups that exhibit these patterns of visible physical features corresponding to differences in geographical ancestry satisfy the conditions of the minimalist concept of race.
Step 4. Infer that minimalist race exists. (Hardimon, 2017: 69)
While Murray is right that clusters corresponding to the folk races appear at K = 5, you can clearly see that Murray assumes that ALL clusters would then be races—and this is where the philosophical emptiness of Murray’s account comes in. Murray has no criteria for his belief that the clusters are races; commonsense is not good enough.
Murray then lambasts the orthodoxy for claiming that race is a social construct.
Advocates of “race is a social construct” have raised a host of methodological and philosophical issues with the cluster analyses. None of the critical articles has published a cluster analysis that does not show the kind of results I’ve shown.
Murray does not, however, discuss a more critical article on Rosenberg et al (2002)—Mills (2017), Are Clusters Races? A Discussion of the Rhetorical Appropriation of Rosenberg et al’s “Genetic Structure of Human Populations.” Mills (2017) discusses the views of Neven Sesardic (2010)—a philosopher—and Nicholas Wade—a science journalist and author of A Troublesome Inheritance (Wade, 2014). Both Wade and Sesardic are what Kaplan and Winther (2014) term “biological racial realists,” whereas Rosenberg et al (2002), Spencer (2014), and Hardimon (2017) are bio-genomic/cluster realists. Mills (2017) discusses the “misappropriation” of the bio-genomic cluster concept due to the “structuring of figures [and] particular phrasings” found in Rosenberg et al (2002). Wade and Sesardic shifted from bio-genomic cluster realism to their own hereditarian stance (biological racial realism; Kaplan and Winther, 2014). While this is not a blow to the positions of Hardimon and Spencer, it is a blow to Murray et al’s conception of “race.”
Murray (2020: 144)—rightly—disavows the concept of folk races but wrongly accepts the claim that we should dispense with the term “race”:
The orthodoxy is also right in wanting to discard the word race. It’s not just the politically correct who believe that. For example, I have found nothing in the genetics technical literature during the last few decades that uses race except within quotation marks. The reasons are legitimate, not political, and they are both historical and scientific.
Historically, it is incontestably true that the word race has been freighted with cultural baggage that has nothing to do with biological differences. The word carries with it the legacy of nineteenth-century scientific racism combined with Europe’s colonialism and America’s history of slavery and its aftermath.
The combination of historical and scientific reasons makes a compelling case that the word race has outlived its usefulness when discussing genetics. That’s why I adopt contemporary practice in the technical literature, which uses ancestral population or simply population instead of race or ethnicity …
[Murray also writes on pg 166]
The material here does not support the existence of the classically defined races.
(Nevermind the fact that Murray’s and Herrnstein’s The Bell Curve was highly responsible for bringing “scientific racism” into the 21st century—despite protestations to the contrary that his work isn’t “scientifically racist.”)
In any case, we do not need to dispense with the term race. We only need to deflate it (Hardimon, 2017; see also Spencer, 2014). Rejecting the claims of those termed biological racial realists by Kaplan and Winther (2014), both Hardimon (2017) and Spencer (2014; 2019) deflate the concept of race—that is, their concepts only discuss what we can see, not what we can’t. Their concepts are deflationist in that they take the physical differences from the racialist concept and reject the psychological assumptions. Murray, in fact, gives in to this “orthodoxy” when he says that we should stop using the term “race.” It’s funny: Murray cites Lewontin (an eliminativist about race) and advocates eliminating the word while still keeping the underlying “guts” of the concept, if you will.
We should only take the concept of “race” out of our vocabulary if, and only if, our concept does not refer. So for us to take “race” out of our vocabulary it would have to not refer to any thing. But “race” does refer—to proper names for a set of human population groups and to social groups, too. So why should we get rid of the term? There is absolutely no reason to do so. But we should be eliminativist about the racialist concept of race—which needs to exist if Murray’s concept of race holds.
There is, contra Murray, material that corresponds to the “classically defined races.” His blind spot can be seen in his admission that he read only the “genetics technical literature.” He didn’t say that he read any philosophy of race on the matter, and it clearly shows.
To quote Hardimon (2017: 97):
Deflationary realism provides a worked-out alternative to racialism—it is a theory that represents race as a genetically grounded, relatively superficial biological reality that is not normatively important in itself. Deflationary realism makes it possible to rethink race. It offers the promise of freeing ourselves, if only imperfectly, from the racialist background conception of race.
Spencer (2014) states that the population clusters found by Rosenberg et al’s (2002) K = 5 run are referents of racial terms used by the US Census. “Race terms” to Spencer (2014: 1025) are “a rigidly designating proper name for a biologically real entity …” Spencer’s (2019b) position is now “radically pluralist.” Spencer (2019a) states that the set of races in OMB race talk (Office of Management and Budget) is one of many forms “race” can take when talking about race in the US; the set of races in OMB race talk is the set of continental human populations; and the continental set of human populations is biologically real. So “race” should be understood as proper names—we should only care if our terms refer or not.
Murray’s philosophy of race is philosophically empty—he just uses “commonsense” to claim that the clusters found are races, which is clear from his claim that ME/NA peoples constitute two more races. This is a little better than Rushton’s three-race model, but not by much. In fact, Murray’s defense of race seems almost identical to Jensen’s (1998: 425) definition, which Fish (2002: 6) critiqued:
This is an example of the kind of ethnocentric operational definition described earlier. A fair translation is, “As an American, I know that blacks and whites are races, so even though I can’t find any way of making sense of the biological facts, I’ll assign people to my cultural categories, do my statistical tests, and explain the differences in biological terms.” In essence, the process involves a kind of reasoning by converse. Instead of arguing, “If races exist there are genetic differences between them,” the argument is “Genetic differences between groups exist, therefore the groups are races.”
So, even two decades later, hereditarians are STILL just assuming that race exists WITHOUT arguments or definitions/theories of race. Rushton (1997) did not define “race” and just assumed the existence of his three races—Caucasians, Mongoloids, and Negroids; Levin (1997), too, just assumes their existence (Fish, 2002: 5). Lynn (2006: 11) uses a similar argument to Jensen’s (1998: 425). Since the concept of race is so important to the hereditarian research paradigm, why have they not operationalized a definition instead of relying on the bare assumption that race exists? Murray can now join the list of his colleagues who assume the existence of race sans definition/theory.
Hardimon’s and Spencer’s concepts get around Fish’s (2002: 6) objection—but Murray’s doesn’t. Murray simply claims that the clusters are races without really thinking it through or providing justification for his claim. Philosophers of race (Hardimon, 2017; Spencer, 2014; 2019a, b), on the other hand, have provided sound justification for belief in race. Murray is also not fair to the social constructivist position (good accounts can be found in Zack (2002), Hardimon (2017), and Haslanger (2000)). Murray seems to be one of those “Social constructivists say race doesn’t exist!” people, but this is false: social constructs are real, and the social does have potent biological effects. Social constructivists are realists about race (Spencer, 2012; Kaplan and Winther, 2014; Hardimon, 2017), contra Helmuth Nyborg.
Murray (2020: 17) asks “Why me? I am neither a geneticist nor a neuroscientist. What business do I have writing this book?” If you are reading this book for a fair—philosophical—treatment of race, look to actual philosophers of race, not to Murray et al who, as shown, have no definition of race and just assume its existence. Spencer’s Blumenbachian Partitions and Hardimon’s minimalist races are how we should understand race in American society, not philosophically empty accounts.
Murray is right—race exists. Murray is also wrong—his kinds of races do not exist. Where he is right, he gives no argument for his belief. His “orthodoxy” is also right about race—since we should accept pluralism about race, there are many different ways of looking at race, what it is, its influence on society, and how society influences it. I would rather be wrong and have an argument for my belief than be right and appeal to “commonsense” without one.
Mass shootings occur about every 12.5 days (Meindl and Ivy, 2017) and so, figuring out why this is the case is of utmost importance. There are, of course, complex and multi-factorial reasons why people turn to mass killing, with popular fixes being to change the environment and attempt to identify at-risk individuals before they carry out such heinous acts.
Just-so stories take many forms—why men have beards, human fear of snakes and spiders, why men go bald, why humans have big brains, why certain genes occur at different frequencies in different populations, etc. The trait (or the genes that influence it) is said to be fitness-enhancing and therefore selected-for—it becomes “naturally selected” (see Fodor and Piattelli-Palmarini, 2010, 2011) and fixed in the species. Mass shootings are becoming more frequent and deadlier in America; is there any evolutionary rationale behind this? Don’t worry, the just-so storytellers are here to tell us why these sorts of actions are and have been prevalent in society.
The end result is a highly provocative interpretation of combining theories of human nature and evolutionary psychology. Additionally, community development and connectedness are described as evolved behaviors that help provide opportunities for individuals to engage and support each other in a conflicted society. In sum, this manuscript helps piece together centuries old [sic] theories describing human nature with current views addressing natural selection and adaptive behaviors that helped shape the good that we know in each person as well as the potential destruction that we seem to tragically be witnessing with increasing frequency. At the time of this manuscript publication yet another mass campus shooting had occurred at Umpqua Community College (near Roseburg, Orgeon). (Hoffman, 2015: 3-4, Philosophical Foundations of Evolutionary Psychology)
It seems that Hoffman (a psychology professor at Metropolitan State University) is implying that actions like “mass campus shootings” are part of “the potential destruction that we seem to tragically be witnessing with increasing frequency.” Hoffman (2015: 175) speaks of “genetic skills”: just “because an individual has the genetic skills to be an athlete, artist, or auto-mechanic does not mean that ipso facto it will happen—what actually defines the outcomes of a specific human behavior is a very complex social and environmental process.” So, at least, Hoffman seems to understand (and endorse) the GxE/DST view.
There are more formal presentations that such actions are “based on an evolutionary compulsion to take action against a perceived threat to their status as males, which may pose a serious threat to their viability as mates and to their ultimate survival” (Muscoreil, 2015). (Let’s hope they stayed an undergrad.)
Muscoreil (2015) claims that such actions are due to status-seeking—taking action against other males perceived as a threat to one’s social status and reproductive success. Of course, killing off the competition would spread that individual’s genes through the population, increasing the frequency of those traits if he happens to have more children (so the just-so story goes). The storytellers are hopeful, though: Muscoreil (2015) proposes working toward “peace and healing,” whereas Hoffman (2015: 176) proposes that we work on cooperation, which was evolutionarily adaptive, and so “communities not only have the capacity but also more importantly an obligation to create specific environments that stimulates and nurture cooperative relationships, such as the development of community service activities and civic engagement opportunities.” So it seems these authors aren’t so doom-and-gloom—through community outreach, we can come together and attempt to decrease these kinds of crimes, which have been on the rise since 1999.
There is a paraphilia called “hybristophilia” in which a woman gets sexually aroused at the thought of being cheated on, or even at the thought of her partner committing heinous crimes such as rape and murder. Some women are even attracted to serial killers; they tend to be in their 40s and 50s, and through the killer, it is said, the woman gains a sense of status in her head. Two kinds of women fall for serial killers: those who think they can “change” the killer and those who are attracted through news headlines about the killer’s actions. Others say that lonely women who want attention write to serial killers because the killers are more likely to write back. This, clearly, points to an innate evolutionary drive for women to be attracted to the killer so they can feel more protected—even if they are not physically with him.
Of course, if there were no guns there would still (theoretically) be mass killings, as anything and everything can be used as a weapon to cause harm to another (which is why this is about mass killings and not mass murders). So, evolutionary psychologists note that a certain action is still prevalent (the fact that autogenic massacre exists) and attempt to explain it in a way only they can—through the tried and tested just-so story method.
Klinesmith et al (2006) showed that men who interacted with a gun showed subsequent increases in testosterone levels compared to those who tinkered with the board game Mouse Trap. Those who had handled the gun showed greater increases in testosterone and added more hot sauce to a cup of water meant for another participant—the study’s measure of aggression. The authors conclude that “exposure to guns may increase later interpersonal aggression, but further demonstrates that, at least for males, it does so in part by increasing testosterone levels” (Klinesmith et al, 2006: 570). So, on this account, guns may increase aggressive behavior via an increase in testosterone. The study has the usual pitfalls—a small sample (n = 30) of college-age men (younger means more aggressive, on average)—and so cannot be generalized. But the idea is out there: holding a gun makes a man feel more powerful and dominant, and so his testosterone levels increase—BUT the testosterone increase need not be driving the cause. It has even been said that mass shooters are “low dominance losers.” Lack of attention leads to decreased social status, which means fewer women are willing to talk with the man, which makes him think his access to women is shrinking due to his lack of social status; when he gets access to a weapon, his testosterone increases and he can give in to his evolutionary compulsions, thereby (the story goes) increasing his virility and access to mates.
Elliot Rodger is one of these types. He killed six people because he was shunned and had no social life—he wanted to punish the women who rejected him and the men he envied. Being mixed-race himself (half white and half Asian), he described his hatred for inter-racial couples and couples in general, the fact that he could never get a girlfriend, and the conflicts that occurred in his family. Of course, all of his life experiences coalesced into the actions he undertook that day—and to the evolutionary psychologist, it is all understandable through an evolutionary lens. He could not get women and was jealous of the men who could, so why not take some of them out and get the “retributive justice” he so yearned for? Evolutionary psychology explains his and similar actions. (VanGeem, 2009 espouses similar ideas.)
These ideas about evolutionary psychology and mass killings can even be extended to terrorism—as I myself (stupidly) have written on. Rushton (2005) uses his (refuted) genetic similarity theory (GST; an extension of kin selection and Dawkins’ selfish gene theory) to explain why suicide bombers are motivated to kill.
These political factors play an indispensable role but from an evolutionary perspective aspiring to universality, people have evolved a ‘cognitive module’ for altruistic self-sacrifice that benefits their gene pool. In an ultimate rather than proximate sense, suicide bombing can be viewed as a strategy to increase inclusive fitness. (Rushton, 2005: 502)
“Genes … typically only whisper their wishes rather than shout” (Rushton, 2005: 502). Note the Dawkins-like wording. Rushton, wisely, cautions in his conclusion that his genetic similarity theory is only one of many reasons why things like this occur and that causation is complex and multi-factorial—right, nice cover. To Rushton, the suicide bomber acts to ensure that those more closely related to him (his family and his ethnic group as a whole) survive and propagate more of their genes, increasing the selfishness and ethnocentrism of that ethnic group. Note how Rushton, despite his protestations to the contrary, is trying to ‘rationalize’ racism and ethnocentric behavior as being ‘in the genes,’ with the selfish genes having the ‘vehicle’ behave more selfishly in order to increase the frequencies of the copies of itself found in co-ethnics. (See Noble, 2011 for a refutation of Dawkins’ theory.) Ethnic nationalism and genocide are the “dark side to altruism,” states Rushton (2005: 504), and this altruistic behavior, in principle, could show why Arabs commit suicide bombings and similar attacks.
Looking at ABC News Tonight coverage between January 1, 2013 and June 23, 2016, Jetter and Walker (2018) show that “news coverage is suggested to cause approximately three mass shootings in the following week, which would explain 58 percent of all mass shootings in our sample”. Others have also suggested that such a “media contagion” effect exists in regard to mass shootings (Towers et al, 2015; Johnston and Joy, 2016; Meindl and Ivy, 2017; Lee, 2018; Pescara-Kovach et al, 2019). The idea of such a “media contagion” makes sense: if one is already harboring ideas of attempting a mass killing, seeing such killings carried out in one’s own country by people of around one’s own age may lead one to think “I can do that, too.” And so, this could be one of the reasons for the increase in such attacks—the sensationalist media constantly covering the events and blasting the name of the perpetrator all over the airwaves.
Though, contrary to popular belief, a mass shooter is not disproportionately likely to be white—relative to population size, he is more likely to be Asian. Out of the last 20 mass killings in the period between 1982 and 2013, 45 percent (9) were committed by non-whites. Asians, being 6 percent of the US population, were 15 percent of the killers over those 31 years. So, relative to population size, Asians commit the most mass shootings, not whites. (See also Mass Shootings by Race; they have up-to-date numbers.) Chen et al (2015) showed that:
being exposed to a Korean American rampage shooter in the media and perceiving race as a cause for this violence was positively associated with negative beliefs and social distance toward Korean American men. Whereas prompting White-respondents to subtype the Korean-exemplar helped White-respondents adjust their negative beliefs about Korean American men according to their attribution of the shooting to mental illness, it did not eliminate the effect of racial attribution on negative beliefs and social distance
Mass shooters who were Asian or another non-white minority got much more attention and received longer stories than white shooters did. “While the two most covered shootings are perpetrated by whites (Sandy Hook and the 2011 shooting of Congresswoman Gabrielle Giffords in Tucson, Arizona), both an Asian and Middle Eastern shooter garnered considerable attention in The Times” (Schildkraut, Elsass, and Meredith, 2016).
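The over-representation claim above is simple arithmetic. As a minimal sketch, using only the figures quoted in the text (the variable names are my own), the ratios work out as follows:

```python
# Figures as quoted above (1982-2013 sample); this only checks the
# arithmetic in the text, it is not an independent data source.
total_killings = 20
nonwhite_killings = 9

# Share of the sampled killings committed by non-whites.
share_nonwhite = nonwhite_killings / total_killings
print(f"Non-white share of killings: {share_nonwhite:.0%}")  # 45%

# Asians: 15 percent of the killers vs. 6 percent of the US population.
asian_killer_share = 0.15
asian_population_share = 0.06

# Over-representation ratio: how many times larger the group's share
# of killers is than its share of the population.
overrepresentation = asian_killer_share / asian_population_share
print(f"Asian over-representation: {overrepresentation:.1f}x")  # 2.5x
```

On these numbers, Asians’ share of killers is 2.5 times their population share, which is the sense in which, “relative to population size,” they commit the most mass shootings.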
Although Hoffman (2015) and Muscoreil (2015) state that we should look to the community to ensure that individuals are not socially isolated so that these kinds of killings may be prevented, there is still no way to predict who a mass shooter will be. Others propose that, due to the sharp increase in school shootings, steps should be taken to evaluate the mental health of at-risk students (Paoloni, 2015; see also Knoll and Annas, 2016) and to stop these attacks before they happen. Mental illness cannot predict mass shootings (Leshner, 2019), but “evolutionary psychologists” cannot predict them either. We did not need the just-so storytelling of Rushton, Hoffman, and Muscoreil to explain why mass killers exist—the solutions to such killings put forth by Hoffman and Muscoreil are fine, but we did not need just-so story ‘reasoning’ to come to that conclusion.
The history of standardized testing—including IQ testing—is a contentious one. What causes score differences between groups of people? I have stated at least four possible reasons for the test gap:
(1) Differences in genes cause differences in IQ scores;
(2) Differences in environment cause differences in IQ scores;
(3) A combination of genes and environment causes differences in IQ scores; and
(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.
I hold to (4) since, as I have noted, the hereditarian-environmentalist debate is frivolous. There is, as I have been saying for years now, no agreed-upon definition of ‘intelligence’, since there are such disparate answers from the ‘experts’ (Lanz, 2000; Richardson, 2002).
For the lack of such a definition only reflects the fact that there is no worked-out theory of intelligence. Having a successful definition of intelligence without a corresponding theory would be like having a building without foundations. This lack of theory is also responsible for the lack of some principled regimentation of the very many uses the word ‘intelligence’ and its cognates are put to. Too many questions concerning intelligence are still open, too many answers controversial. Consider a few examples of rather basic questions: Does ‘intelligence’ name some entity which underlies and explains certain classes of performances, or is the word ‘intelligence’ only sort of a shorthand-description for ‘being good at a couple of tasks or tests’ (typically those used in IQ tests)? In other words: Is ‘intelligence’ primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities (compare Deese 1993)? Or should we turn our backs on the noun ‘intelligence’ and focus on the adverb ‘intelligently’, used to characterize certain classes of behaviors? (Lanz, 2000: 20)
Nash (1990: 133-4) writes:
Always since there are just a series of tasks of one sort or another on which performance can be ranked and correlated with other performances. Some performances are defined as ‘cognitive performances’ and other performances as ‘attainment performances’ on essentially arbitrary, common sense grounds. Then, since ‘cognitive performances’ require ‘ability’ they are said to measure that ‘ability’. And, obviously, the more ‘cognitive ability’ an individual possesses the more that individual can achieve. These procedures can provide no evidence that IQ is or can be measured, and it is rather beside the point to look for any, since that IQ is a metric property is a fundamental assumption of IQ theory. It is impossible that any ‘evidence’ could be produced by such procedures. A standardised test score (whether on tests designated as IQ or attainment tests) obtained by an individual indicates the relative standing of that individual. A score lies within the top ten percent or bottom half, or whatever, of those gained by the standardisation group. None of this demonstrates measurement of any property. People may be rank ordered by their telephone numbers but that would not indicate measurement of anything. IQ theory must demonstrate not that it has ranked people according to some performance (that requires no demonstration) but that they are ranked according to some real property revealed by that performance. If the test is an IQ test the property is IQ — by definition — and there can in consequence be no evidence dependent on measurement procedures for hypothesising its existence. The question is one of theory and meaning rather than one of technique. It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.
This is similar to Mary Midgley’s critique of ‘intelligence’ in her last book before her death, What Is Philosophy For? (Midgley, 2018). The ‘definitions’ of ‘intelligence’ and, along with them, its ‘measurement’ have never been satisfactory. Haier (2016: 24) refers to Gottfredson’s ‘definition’ of ‘intelligence’, stating that ‘intelligence’ is a ‘general mental ability.’ But if that is the case, that it is a ‘general mental ability’ (g), then ‘intelligence’ does not exist, because ‘g’ does not exist as a property in the brain. Lanz’s (2000) critique is also like Howe’s (1988; 1997): ‘intelligence’ is a descriptive, not an explanatory, term.
Now that the concept of ‘intelligence’ has been covered, let’s turn to race and test bias.
Test items are biased when they have different psychological meanings across cultures (He and van de Vijver, 2012: 7). If items have different meanings across cultures, then the tests will not reflect the same ‘ability’ between cultures. Being exposed to the knowledge on a test—and to its correct usage—is imperative for performance. For if one has not been exposed to the content on the test, how can one be expected to do well? Indeed, there is much evidence that minority groups are not acculturated to the items on the test (Manly et al, 1997; Ryan et al, 2005; Boone et al, 2007). This is what IQ tests measure: acculturation to the culture of the tests’ constructors, the school curriculum, and schoolteachers—aspects of white, middle-class culture (Richardson, 1998). Ryan et al (2005) found that reading level and educational level, not race or ethnicity, were related to performance on psychological tests.
Serpell et al (2006) took 149 white and black fourth-graders and randomly assigned them to ethnically homogeneous groups of three, working on a motion task on a computer. Both blacks and whites learned equally well, but the transfer outcomes were better for blacks than for whites.
Helms (1992) claims that standardized tests are “Eurocentric”, which is “a perceptual set in which European and/or European American values, customs, traditions and characteristics are used as exclusive standards against which people and events in the world are evaluated and perceived.” In her conclusion, she stated that “Acculturation and assimilation to White Euro-American culture should enhance one’s performance on currently existing cognitive ability tests” (Helms, 1992: 1098). There just so happens to be evidence for this (along with the studies referenced above).
Fagan and Holland (2002) showed that when performance required exposure to specific information, whites did better than blacks, but when it was based on generally available knowledge, there was no difference between the groups. Fagan and Holland (2007) asked whites and blacks to solve problems of the kind found on usual IQ-type tests (e.g., standardized tests). Half of the items were solvable on the basis of generally available information, but the other items were solvable only on the basis of previously acquired knowledge, which indicated test bias (Fagan and Holland, 2007). They, again, showed that when knowledge is equalized, so are IQ scores. Thus, cultural differences in information acquisition explain IQ score differences. “There is no distinction between crassly biased IQ test items and those that appear to be non-biased” (Mensh and Mensh, 1991). This is because each item is chosen because it agrees with the distribution that the test constructors presuppose (Simon, 1997).
How do the neuropsychological studies referenced above, along with Fagan and Holland’s studies, show that test bias—and, with it, test construction—is built into the test and causes the observed distribution of scores? Simple: since the test constructors come from a higher social class, and the items chosen for inclusion are more likely to be encountered in some cultural groups than in others, it follows that groups score lower because they were not exposed to the culturally specific knowledge used on the test (Richardson, 2002; Hilliard, 2012).
The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)
This is very easily seen with how such tests are constructed. The biases go back to the beginning of standardized testing—the first one being the SAT. The tests’ constructors had an idea of who was or was not ‘intelligent’ and so constructed the tests to show what they already ‘knew.’
…as one delves further … into test construction, one finds a maze of arbitrary steps taken to ensure that the items selected — the surrogates of intelligence — will rank children of different classes in conformity with a mental hierarchy that is presupposed to exist. (Mensh and Mensh, 1991)
Garrison (2009: 5) states that standardized tests “exist to assess social function” and that “Standardized testing—or the theory and practice known as “psychometrics” … is not a form of measurement.” Tests are constructed today in the same way they were in the early 1900s—with arbitrary items and a presupposed mental hierarchy that become baked into the tests by virtue of how they are constructed.
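To make the item-selection point concrete, here is a toy simulation of the process described above. This is my own illustration under stated assumptions (the item counts, exposure probabilities, and function names are all invented, not drawn from any cited study): two groups draw from an item pool that favors neither group on average, but only the items on which one group out-performs the other are kept for the final “test.”

```python
import random

random.seed(0)

def simulate_item(p_a, p_b, n=200):
    """Return each group's pass rate on one candidate item, given the
    group's probability of prior exposure to the item's content."""
    a = sum(random.random() < p_a for _ in range(n)) / n
    b = sum(random.random() < p_b for _ in range(n)) / n
    return a, b

# Candidate item pool: exposure probabilities drawn the same way for
# both groups, so the pool as a whole favors neither group.
pool = []
for _ in range(500):
    p_a = random.uniform(0.3, 0.9)
    p_b = random.uniform(0.3, 0.9)
    pool.append(simulate_item(p_a, p_b))

# "Item selection": keep only items on which group A out-performs
# group B, mimicking selection for items that agree with a
# presupposed hierarchy.
kept = [(a, b) for a, b in pool if a > b]

gap_pool = sum(a - b for a, b in pool) / len(pool)
gap_kept = sum(a - b for a, b in kept) / len(kept)
print(f"Mean A-B gap, full item pool: {gap_pool:+.3f}")  # near zero
print(f"Mean A-B gap, selected items: {gap_kept:+.3f}")  # clearly positive
```

The selected “test” shows a clear group gap even though the pool it was drawn from showed essentially none—which is the sense in which score distributions can be built in by item selection rather than discovered by it.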
IQ-ists like to say that certain genes are associated with high intelligence (using their GWASes), but what could the argument possibly be that would show that variation in SNPs causes variation in ‘intelligence’? What would such a theory even look like? How is the hereditarian hypothesis not a just-so story? Such tests were created to justify the hierarchies in society; they were constructed to give the results that they get. So I do not see how genetic ‘explanations’ are not just-so stories.
(1) Blacks and whites are different cultural groups.
(2) If (1), then they will have different experiences by virtue of being different cultural groups.
(3) So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures and so, all tests of ability are culture-bound.

Knowledge, Culture, Logic, and IQ
Rushton and Jensen (2005) claim that the evidence they review from the past 30 years of IQ testing points to a ‘genetic component’ in the black-white IQ gap, relying on the flawed Minnesota study of twins “reared apart” (Joseph, 2018)—among other methods—to generate heritability estimates; they state that “The new evidence reviewed here points to some genetic component in Black–White differences in mean IQ.” The concept of heritability, however, is a flawed metric (Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2014; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017). That G and E interact means that we cannot tease out “percentages” of nature’s and nurture’s “contribution” to a “trait.” So one cannot point to heritability estimates as if they point to a “genetic cause” of the score gap between blacks and whites. Further note that the gap has closed in recent years (Dickens and Flynn, 2006; Smith, 2018).
And now, here is another argument, based on the differing experiences that cultural groups have, which explains IQ score differences (e.g., Mensh and Mensh, 1991; Manly et al, 1997; Kwate, 2001; Fagan and Holland, 2002, 2007; Cole, 2004; Ryan et al, 2005; Boone et al, 2007; Au, 2008; Hilliard, 2012; Au, 2013):
(1) If children of different class levels have experiences of different kinds with different material; and
(2) if IQ tests draw a disproportionate amount of test items from the higher classes; then
(c) higher-class children should have higher scores than lower-class children.