NotPoliticallyCorrect

Hereditarianism and Religion

2200 words

In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)

In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the religious beliefs of sociobiology's founder E.O. Wilson and the epiphany he described upon learning of evolution. A Christian author then used sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked in—IQ.

Philosopher of education John White has looked into the shared origins of IQ testing and the Puritan religion. The main link between Puritanism and IQ is predestination. The first IQ-ists conceptualized IQ—'g' or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, whether one went to Hell was predestined before one even existed as a human being, whereas to the IQ-ists, IQ was predestined by genes.

John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:

Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton's phrase 'the extreme classes, the best and the worst.' On the one hand there were the 'gifted', 'the eminent', 'those who have honourably succeeded in life', presumably … the most valuable portion of our human stock. On the other, the 'feeble-minded', the 'cretins', the 'refuse', those seeking to avoid 'the monotony of daily labor', democracy's ballast, 'not always useless but always a potential liability'.

A Puritan-type parallel can be drawn here—the 'cretins' and 'feeble-minded' are 'the damned' whereas the 'gifted' and 'eminent' are 'the saved.' This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a greater share of genes that influence intellectual attainment. Contrast this with the Puritan claim that certain people are chosen before they exist to be either damned or saved. Certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism about IQ is, in a way, just like Puritan predestination—according to Galton, Burt, and the other IQ-ists of the 1910s-1920s (ever since Goddard brought back the Binet-Simon Scales from France in 1910).

Some Puritans banned the poor from their communities, seeing them as "disruptors to Puritan communities." Stone (2018: 3-4) in An Invitation to Satan: Puritan Culture and the Salem Witch Trials writes:

The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.

Could the Puritan treatment of the poor be due to their beliefs about predestination? The Puritan John Winthrop stated in his sermon A Model of Christian Charity that "some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection." This, too, is still around today: IQ is said to set "upper limits" on one's "ability ceiling" for achievement. The poor are those who do not have the 'right genes'. This is also a reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The claim that one's ability is predetermined in the genes—that each person has their own genetically constrained 'ceiling of ability'—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.

To White (2006), the claim that we have this 'innate capacity' that is 'general'—this 'intelligence'—is wanting. He takes this further, though. In discussing Galton's and Burt's claim that there are 'ability ceilings'—and in discussing a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs and that their scores increase by 15 points. This, then, would have a large effect on the correlation: "So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied 'I doubt whether, had we returned a second time, the coaching would have affected our correlations'" (White, 2006: 16). Burt seems to be implying that a "ceiling of ability" exists—an idea he got from his mentor, Galton. White continues:

It would appear that neither Galton nor Burt has any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.

I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)

Burt believed that we should use IQ tests to shoe-horn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would then become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed to vocations that were based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:

Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.

So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school which 'showed us' which vocations we would be 'good at' based on a questionnaire). Having people in Binet's 'ideal city' work based on their 'known aptitudes' would increase, not decrease, inequality, so Binet's envisioned city is exactly like today's world. Mensh and Mensh (1991: 24) continue:

When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.

White (2006: 42) writes:

Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.

Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. The parallels run deep: the Puritan concern for salvation mirrors the IQ-ist belief that one's 'innate intelligence' dictates whether one will succeed or fail in life (based on one's genes); both associated those lower on the social ladder, with their supposed work ethic and morals, with the reprobate on the one hand and low-IQ people on the other; both groups believed that the family is the 'mechanism' by which individuals are 'saved' or 'damned'—the Puritans presuming salvation is transmitted through one's family, the IQ-ists presuming that those with 'high intelligence' have children with the same; and both believed that their favored group should be at the top with the best jobs and best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one was endowed with 'innate general intelligence' by genes—a concept the current-day IQ-ists have taken over wholesale.

White drew his parallel between IQ and Puritanism without being aware that one of the first anti-IQ-ists—an American journalist named Walter Lippmann—had drawn the same parallel in the mid-1920s. (See Mensh and Mensh, 1991 for a discussion of Lippmann's grievances with the IQ-ists.) The parallel thus holds both for Galton's concept of 'intelligence' and for that of the IQ-ists today. White (2005: 440) notes "that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence." Still, the evidence that White has marshaled in favor of the claim is interesting and, as noted, many parallels exist. It would be some huge coincidence for all of these parallels to exist without being causal (from Puritan beliefs to hereditarian IQ dogma).

This is similar to what Oyama (1985: 53) notes:

Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.

"Natural selection" plays the role that God did before Darwin—a point even Ernst Mayr stated (Oyama, 1985: 85).

But this parallel between Puritanism and hereditarianism doesn't just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of 'information' prior to its activation and use by the physiological system still pervades our thought, even though many have been at the forefront of changing that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).

The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have been identified as having their origins in eugenic beliefs, along with the Puritan-like belief in being saved/damned on the basis of something predetermined and out of one's control—just like one's genes. The conception of 'ability ceilings'—measured by IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in 'ability ceilings' and claim that genes contain a kind of "blueprint" (a view still held today) that predestines one toward certain dispositions, behaviors, and actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is 'known' about one's group. When Binet wrote, the gene had yet to be conceptualized, but the idea has stayed with us ever since.

So the concept of "IQ" emerged not only from the 'need' to 'identify' individuals' 'aptitudes'—what they would be well-suited for in, say, Binet's ideal city—but also from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the 'predictions' it 'makes' (see Nash, 1990).

Life in the Time of Corona

2150 words

(Disclaimer: None of this is medical advice.)

Unless you've been living under a rock since December 2019, you will have heard the panic that SARS-CoV-2 (the virus that causes COVID-19—coronavirus disease) has been causing ever since it emerged in Wuhan, China (Singhal, 2020). This virus spreads very easily—though asymptomatic transmission is thought to be rare, according to the CDC. There is one case report, though, of an infant who showed no signs of COVID-19 but had a high viral load (Kam et al, 2020). In any case, Trump flip-flopped from calling it a 'hoax' to taking it seriously, acknowledging the pandemic. "I've felt it was a pandemic long before it was called a pandemic," Trump said. Ah, of course, it must have been just a facade to call it a hoax. (Pandering to his base?) The ever-prescient Trump knows all.

Speaking of prediction, Cheng et al (2007) stated: "The presence of a large reservoir of SARS-CoV-like viruses in horseshoe bats, together with the culture of eating exotic mammals in southern China, is a time bomb. The possibility of the reemergence of SARS and other novel viruses from animals or laboratories and therefore the need for preparedness should not be ignored." Quite the prediction from 13 years ago—implicating southern China's "culture of eating exotic mammals", which is more than likely the origin of the current outbreak.

There has been some discussion of whether or not the coronavirus is "as bad" as they're saying—discussion that has been criticized, for example, for not bringing up the context-dependency of the numbers. The number of cases in the US, as of Friday, March 20, 2020, was 15,219 with 201 deaths, and the count keeps increasing daily. As of 3/22/2020, America has had 26,909 cases with 349 deaths, while 178 have recovered. Ninety-seven percent of active cases are currently mild while three percent are serious.
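For a rough sense of the growth rate those two snapshots imply: going from 15,219 cases on 3/20 to 26,909 on 3/22 works out to a doubling time of roughly two and a half days. A quick back-of-the-envelope sketch (assuming, crudely, constant exponential growth between the two dates; the counts are the ones cited above):

```python
# Back-of-the-envelope doubling time from the two case counts cited above,
# assuming (crudely) constant exponential growth between the snapshots.
import math

cases_mar20 = 15_219
cases_mar22 = 26_909
days_between = 2

daily_growth = (cases_mar22 / cases_mar20) ** (1 / days_between)  # ~1.33x per day
doubling_days = math.log(2) / math.log(daily_growth)              # ~2.4 days
print(f"daily growth factor: {daily_growth:.2f}, doubling time: {doubling_days:.1f} days")
```

At that pace the count would grow roughly tenfold in about eight days, which is why each update looked so much worse than the last.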

The current recommendations—social distancing, self-quarantining—are what we are doing to fight the virus, but I think we are going to need more drastic measures. Social distancing and self-quarantining will help to slow the spread of the virus, but the virus is still obviously spreading.

All of the talk about what to call it—Wuhan virus, Chinese virus, China virus, coronavirus—is irrelevant. Call it whatever you'd like; just make sure that whomever you're communicating with knows what you're talking about. (And, if you want to ensure they do, just call it "coronavirus", as that seems to be the name that has stuck these past few months.) I understand the desire to identify where it began and spread from, but of course, others will use that for racial reasons.

The past few days there has been a lot of attention focused on hydroxychloroquine (HCQ), an anti-malarial drug, and azithromycin, an antibiotic; a trial was done to see if they would have any effect on COVID-19 (Liu et al, 2020).

For HCQ, there is an “expert consensus” on HCQ treatment and COVID-19, and they state:

It recommended chloroquine phosphate tablet, 500mg twice per day for 10 days for patients diagnosed as mild, moderate and severe cases of novel coronavirus pneumonia and without contraindications to chloroquine.

Chloroquine has been shown to reduce spread and infection of coronaviruses (Vincent et al, 2005; Savarino et al, 2006; Wang et al, 2015; Wang et al, 2020). Wang et al (2015) note that:

HCQ and chloroquine are cellular autophagy modulators that interfere with the pH-dependent steps of endosome-mediated viral entry and late stages of replication of enveloped viruses such as retroviruses, flaviviruses, and coronaviruses (Savarino and others; Vincent and others).

I don't know what to make of such results; I am awaiting larger trials on the matter. But there is some hope that anti-malarial drugs can help curb the disease.

The Chinese knew that this virus was similar to other SARS strains; their scientists were ordered to stop testing samples and to destroy the evidence. (See here for a timeline of events.) The scary thing is that this virus has symptoms similar to the common cold we get every winter, so some may brush it off as 'just a cold.' I came down with a cold at the end of January and was out of commission for the week. Thankfully, it was not COVID-19.

Italy and China have a strong trade relationship, which seems to have cost Italy. Italy has one of the oldest populations in the world. Ninety-nine percent of those who died of coronavirus in Italy, though, had other health problems, such as obesity, hypertension, previous heart problems, etc. Italy began locking down cities as early as two weeks ago, yet it has reported a staggering 4,825 deaths. This, though, is to be expected when a quarter of the country is aged 65 and older, many with multiple comorbidities. So if it is that bad in Italy, with a smaller population, what does that mean for the US in the coming weeks?

New York and New Jersey banned gatherings of more than 50 people, dining out, gyms, etc. in an effort to curb the transmission of the virus. Then, Friday night at midnight (3/21/2020), only essential businesses were allowed to stay open—essentials include healthcare, infrastructure, food (no dining in; take-out or delivery only), grocery stores, mail, laundromats, law enforcement, etc. In NJ, all businesses were ordered to close except grocery stores, banks, pet stores, convenience stores, and the like. This affected me (gyms closed), so I cannot work. I anticipated this a few weeks ago and found a job in logistics, but I was laid off on Friday due to the shut-downs of nonessential businesses (the shut-downs decreased my work). Now I'm thinking about hunkering down until at least June. Given what we know about the social determinants of health (Marmot, 2005; Cockerham, 2007; Barr, 2019), we can expect what is associated with low class (poor health, stress, etc.) to increase as well.

This is only going to get worse in the coming weeks. I do see fewer people out on the street, and I am glad that states are taking measures to curb the transmission of the virus, but I still see people not really taking it seriously. Ads on the radio inform us about what is going on around the country in terms of death and transmission rates, and they strongly suggest that people stay home and avoid public transportation. Obviously, enclosed places where many people walk in and out in a short span of time are great places for the virus to spread. But what if we're doing what the virus 'wants'? Don't worry, the evo-psychos are here to tell us just-so stories.

By this account, COVID-19 is turning out to be a remarkably intelligent evolutionary adversary. By exploiting vulnerabilities in human psychology selectively bred by its pathogen ancestors, it has already shut down many of our schools, crashed our stock market, increased social conflict and xenophobia, reshuffled our migration patterns, and is working to contain us in homogenous spaces where it can keep spreading. We should pause to remark that COVID-19 is extraordinarily successful epidemiologically, precisely because it is not extremely lethal. With its mortality rate of 90%, for example, Ebola is a rather stupid virus: It kills its host — and itself — too quickly to spread far enough to reshape other species’ life-ways to cater to its needs. (The Coronavirus Is Much Worse Than You Think)

Ah, the non-lethality of COVID-19 is to its benefit—it can spread more, it is an “intelligent evolutionary adversary” but it is causing a “moral panic” as well. The damage to our psyche, apparently, is worse than what it could do to our lungs. And while I do agree that this could damage our collective psyches, we don’t need to tell just-so stories about it.

When we come out of this pandemic, I can see us being very cautious as we go back to normal life (in affected places, people where I live are still going out, just not as much). Then, hundreds of years later, Evolutionary Psychologists notice how averse people are to going outside. "Why are people so introverted? Why do people avoid others?" they ask. "Why are those who wear masks more attractive than those who don't?" They then discover the pandemic of the 2020s which ravaged the world. "Ah! Critics won't be able to say 'just-so stories' now! We know the preceding event—we have a record of it happening!" And so the evo-psychos celebrate.

In all seriousness, if people do take this seriously, there may be some changes to social/cultural customs, including how we greet people.

Cao et al (2020) conclude: "The East Asian populations have much higher AFs [allele frequencies] in the eQTL variants associated with higher ACE2 expression in tissues (Fig. 1c), which may suggest different susceptibility or response to 2019-nCoV/SARS-CoV-2 from different populations under the similar conditions." Asian men smoke more cigarettes than Asian women (Ma et al, 2002, 2004; Chae, Gavin, and Takeuchi, 2006; Tsai et al, 2008). In your lungs you have what are called "cilia"—fibers that move debris and microbes out while also protecting the bronchus and trapping microorganisms. The virus attacks these same cilia, which degrade when one smokes. Therefore, the fact that East Asian populations have higher frequencies of alleles associated with higher ACE2 expression, along with Asian men's higher rates of smoking, may be why Asian men seem to be affected more than Asian women. In any case, smokers of any race need to exercise caution.

What if, after the pandemic is over, life does not go back to normal? What if life during the pandemic becomes the 'new normal' because everyone is paranoid about contracting the virus again? For introverts like myself, it's easy to lock in. I have hundreds of books to choose from, so if I do choose to lock in for two months (which I am considering), I won't really be bored. But my thing is this: what's the point of locking in when everyone else isn't? The virus still spreads, and when you finally go out the pandemic will still be going on. The point of quarantining is understandable—but if everyone doesn't do it, will it really work? Libertarians be damned, we need the government to step in and do these kinds of things right now. It's not about the individual, but the public as a whole.

On the other hand, it is thought-provoking to consider the possibility that the government is ramping up the drama in the news to see how far it can go with social control. What a perfect way to see how far the public will go when given "suggestions" from the government. Just as the government is "suggesting" we be inside by 8 pm to mitigate viral transmission, for example, it may be testing what we would accept and how far it can go before making such measures mandatory. It is also interesting to think about how all of the toilet paper, hand sanitizer, hand soap, etc. are sold out everywhere.

People in my generation have 9/11 to look back on as the "That's when the world changed" moment. Kids alive today (around 7-15 years old) are experiencing their "9/11", as this is when the world changed for them. But this coronavirus pandemic is not at the country level—it is at the world level. The whole WORLD is affected. So, since our Gregorian calendar is based on the birth of Jesus, I propose the following: relabel the years 1 AD/CE through December 2019 as BC (before Corona) and anything after December 2019 as AC (after Corona).

I hope that, looking back on the current goings-on, we will not be talking about high death tolls and that we can get this under control. The only course of action (for now) is to attempt to stop the transmission of the virus from human to human. COVID-19 can be said to be largely a social disease, since that is how it is most likely transmitted, which is why social distancing is so important. Being social is how the virus spreads, so to stop spreading the virus we need to be anti-social.

If we do not heed these warnings, then we will permanently be living in the Time of Corona. Coronavirus will dictate what we do and when we do it. No one will want to get sick, but no one will want to take the steps needed to eradicate the threat either. This thing is just getting started; from the end of the month into the first few weeks of April, it is only going to get worse. I hope you all are prepared (have food [meat], water, soap, etc.) because we're in for a hell of a ride. With many businesses closing down in an effort to curb the transmission of COVID-19, many people—many low-income people—will be out of jobs.

Charles Murray’s Philosophically Nonexistent Defense of Race in “Human Diversity”

2250 words

Charles Murray published his Human Diversity: The Biology of Gender, Race, and Class on 1/28/2020. I have an ongoing thread on Twitter discussing it.

Murray talks of an "orthodoxy" that denies the biology of gender, race, and class. This orthodoxy, Murray says, is made up of social constructivists, and Murray is here to set the record straight. I will discuss some of Murray's other arguments in his book, but for now I will focus on the section on race.

Murray, it seems, has no philosophical grounding for his belief that the clusters identified in these genomic runs are races—he simply assumes that the groups that appear in these analyses are races. But this assumption is unfounded, and asserting that the clusters are races without any sound justification actually undermines his claim that races exist. That is one thing that really jumped out at me as I was reading this section of the book. Murray discusses what geneticists say, but he does not discuss what any philosophers of race say. And that is his downfall.

Murray discusses the program STRUCTURE, into which geneticists input the number of clusters (K) they want the analyzed DNA partitioned into (see also Hardimon, 2017: chapter 4). Rosenberg et al (2002) sampled 1056 individuals from 52 different populations using 377 microsatellites. They defined the populations by culture, geography, and language, not skin color or race. When K was set to 5, the clusters represented folk concepts of race, corresponding to the Americas, Europe, East Asia, Oceania, and Africa. (See Minimalist Races Exist and are Biologically Real.) Yes, the number of clusters that come out of STRUCTURE is predetermined by the researchers, but the clusters "are genetically structured … which is to say, meaningfully demarcated solely on the basis of genetic markers" (Hardimon, 2017: 88).
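STRUCTURE itself fits a Bayesian admixture model, but the methodological point at issue here, that K is an input chosen by the researcher rather than a finding, can be illustrated with any clustering algorithm. Below is a minimal stand-in (not STRUCTURE; just scikit-learn's k-means on made-up marker data, purely to show that the procedure returns however many clusters you ask for):

```python
# Toy illustration: the cluster count K is a researcher-chosen parameter.
# This is NOT the STRUCTURE program, just a k-means stand-in on fake data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake "genotypes": 300 individuals x 50 markers coded 0/1/2 (allele counts),
# generated as pure noise with no population structure at all.
genotypes = rng.integers(0, 3, size=(300, 50))

for k in (2, 5, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(genotypes)
    print(k, "clusters requested ->", np.bincount(labels), "individuals per cluster")
```

Even on pure noise, the algorithm dutifully hands back 2, 5, or 7 'clusters' on demand. The substantive question, as Hardimon notes, is whether the clusters that emerge from real genomic data are meaningfully demarcated, not whether a clustering routine can produce them.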

Races as clusters

Murray then discusses Li et al, who set K to 7, whereupon North Africa and the Middle East appeared as new clusters. Murray then provides a graph from Li et al.


So Murray's argument seems to be: "(1) If clusters that correspond to concepts of race appear in STRUCTURE and cluster analyses when K is set to 5-7, then (2) race exists. (1). Therefore (2)." Murray is missing a few things here, namely conditions (see below) that would place the clusters into racial categories. His assumption that the clusters are races—although (partly) true—is not grounded in any sound reasoning, as can be seen in his partitioning of Middle Easterners and North Africans as separate races. Rosenberg et al (2002) showed the Kalash as a distinct cluster at K = 6—are they a race too?

No, they are not. Just because STRUCTURE identifies a population as genetically distinct, it does not follow that the population in question is a race, because it does not fit the criteria for racehood. The fact that the clusters correspond to major geographic areas means that the clusters represent continental-level minimalist races; races, therefore, exist (Hardimon, 2017: 85-86). But to count as a continental-level minimalist race, a group must fit the following conditions (Hardimon, 2017: 31):

(C1) … a group is distinguished from other groups of human beings by patterns of visible physical features
(C2) [the] members are linked by a common ancestry peculiar to members of that group, and
(C3) [they] originate from a distinctive geographic location

[…]

…what it is for a group to be a race is not defined in terms of what it is for an individual to be a member of a race. What it means to be an individual member of a minimalist race is defined in terms of what it is for a group to be a race.

Murray (paraphrased): "Cluster analyses/STRUCTURE spit out these continental microsatellite divisions which correspond to commonsense notions of race." What is Murray's logic for assuming that clusters are races? It seems that there is no logic behind it—just "commonsense." (See also Fish, below.) Having found no argument for accepting X number of clusters as the races Murray wants, I can only assume that Murray chose whichever number agreed with his preconceptions and used it for his book. (If I am in error and there is an argument in the book, maybe someone can quote it.) What kind of justification is that?

Compare Hardimon's argument and definition, on which a race is:

a subdivision of Homo sapiens—a group of populations that exhibits a distinctive pattern of genetically transmitted phenotypic characters that corresponds to the group’s geographic ancestry and belongs to a biological line of descent initiated by a geographically separated and reproductively isolated founding population. (Hardimon, 2017: 99)

[…]

Step 1. Recognize that there are differences in patterns of visible physical features of human beings that correspond to their differences in geographic ancestry.

Step 2. Observe that these patterns are exhibited by groups (that is, real existing groups).

Step 3. Note that the groups that exhibit these patterns of visible physical features corresponding to differences in geographical ancestry satisfy the conditions of the minimalist concept of race.

Step 4. Infer that minimalist race exists. (Hardimon, 2017: 69)

While Murray is right that clusters corresponding to the folk races appear at K = 5, you can clearly see that Murray assumes that ALL clusters would then be races, and this is where the philosophical emptiness of Murray's account comes in. Murray has no criteria for his belief that the clusters are races; commonsense is not good enough.

Philosophical emptiness

Murray then lambasts the orthodoxy for claiming that race is a social construct.

Advocates of “race is a social construct” have raised a host of methodological and philosophical issues with the cluster analyses. None of the critical articles has published a cluster analysis that does not show the kind of results I’ve shown.

Murray does not, however, discuss a more critical article on Rosenberg et al (2002): Mills (2017), Are Clusters Races? A Discussion of the Rhetorical Appropriation of Rosenberg et al's "Genetic Structure of Human Populations." Mills (2017) discusses the views of the philosopher Neven Sesardic (2010) and of Nicholas Wade—science journalist and author of A Troublesome Inheritance (Wade, 2014). Both Wade and Sesardic are what Kaplan and Winther (2014) term "biological racial realists", whereas Rosenberg et al (2002), Spencer (2014), and Hardimon (2017) are bio-genomic/cluster realists. Mills (2017) discusses the "misappropriation" of the bio-genomic cluster concept enabled by the "structuring of figures [and] particular phrasings" found in Rosenberg et al (2002). Wade and Sesardic shifted from bio-genomic cluster realism to their own hereditarian stance (biological racial realism; Kaplan and Winther, 2014). While this is not a blow to the positions of Hardimon and Spencer, it is a blow to Murray et al's conception of "race."

Murray (2020: 144)—rightly—disavows the concept of folk races but wrongly accepts the claim that we should dispense with the term "race":

The orthodoxy is also right in wanting to discard the word race. It’s not just the politically correct who believe that. For example, I have found nothing in the genetics technical literature during the last few decades that uses race except within quotation marks. The reasons are legitimate, not political, and they are both historical and scientific.

Historically, it is incontestably true that the word race has been freighted with cultural baggage that has nothing to do with biological differences. The word carries with it the legacy of nineteenth-century scientific racism combined with Europe’s colonialism and America’s history of slavery and its aftermath.

[…]

The combination of historical and scientific reasons makes a compelling case that the word race has outlived its usefulness when discussing genetics. That’s why I adopt contemporary practice in the technical literature, which uses ancestral population or simply population instead of race or ethnicity

[Murray also writes on pg 166]

The material here does not support the existence of the classically defined races.

(Never mind the fact that Murray's and Herrnstein's The Bell Curve was highly responsible for carrying "scientific racism" into the 21st century—despite Murray's protestations that his work isn't "scientifically racist.")

In any case, we do not need to dispense with the term race. We only need to deflate it (Hardimon, 2017; see also Spencer, 2014). Rejecting the claims of those termed biological racial realists by Kaplan and Winther (2014), both Hardimon (2017) and Spencer (2014; 2019) deflate the concept of race—that is, their concepts only discuss what we can see, not what we can't. Their concepts are deflationist in that they take the physical differences from the racialist concept and reject the psychological assumptions. Murray, in fact, is giving in to this "orthodoxy" when he says that we should stop using the term "race." It's funny: Murray cites Lewontin (an eliminativist about race) and himself advocates eliminating the word while still keeping the underlying "guts" of the concept, if you will.

We should take the concept of "race" out of our vocabulary if, and only if, the concept does not refer. So for us to take "race" out of our vocabulary, it would have to refer to no thing. But "race" does refer—race terms are proper names for a set of human population groups, and they refer to social groups, too. So why should we get rid of the term? There is absolutely no reason to do so. But we should be eliminativist about the racialist concept of race—which would need to be vindicated for Murray's concept of race to hold.

There is, contra Murray, material that corresponds to the "classically defined races." His contrary conclusion can be explained by his admission that he read only the "genetics technical literature"; he does not say that he read any philosophy of race on the matter, and it clearly shows.

To quote Hardimon (2017: 97):

Deflationary realism provides a worked-out alternative to racialism—it is a theory that represents race as a genetically grounded, relatively superficial biological reality that is not normatively important in itself. Deflationary realism makes it possible to rethink race. It offers the promise of freeing ourselves, if only imperfectly, from the racialist background conception of race.

Spencer (2014) states that the population clusters found in Rosenberg et al's (2002) K = 5 run are the referents of the racial terms used by the US Census. A "race term", to Spencer (2014: 1025), is "a rigidly designating proper name for a biologically real entity …" Spencer's (2019b) position is now "radically pluralist." Spencer (2019a) states that the set of races in OMB (Office of Management and Budget) race talk is one of many forms "race" can take when talking about race in the US; that the set of races in OMB race talk is the set of continental human populations; and that the continental set of human populations is biologically real. So "race" should be understood as a set of proper names—we should only care whether our terms refer or not.

Murray's philosophy of race is philosophically empty—Murray just uses "commonsense" to claim that the clusters found are races, as is clear from his claim that ME/NA peoples constitute two more races. This is perhaps better than Rushton's three-race model, but not by much. In fact, Murray's defense of race seems to be almost just like Jensen's (1998: 425) definition, which Fish (2002: 6) critiqued:

This is an example of the kind of ethnocentric operational definition described earlier. A fair translation is, “As an American, I know that blacks and whites are races, so even though I can’t find any way of making sense of the biological facts, I’ll assign people to my cultural categories, do my statistical tests, and explain the differences in biological terms.” In essence, the process involves a kind of reasoning by converse. Instead of arguing, “If races exist there are genetic differences between them,” the argument is “Genetic differences between groups exist, therefore the groups are races.”

So, even two decades later, hereditarians are STILL just assuming that race exists WITHOUT arguments and definitions/theories of race. Rushton (1997) did not define "race" and just assumed the existence of his three races—Caucasians, Mongoloids, and Negroids; Levin (1997), too, just assumes their existence (Fish, 2002: 5). Lynn (2006: 11) uses a similar argument to Jensen's (1998: 425). Since the concept of race is so important to the hereditarian research paradigm, why have they not operationalized a definition instead of just assuming that race exists without argument? Murray can now join the list of colleagues who assume the existence of race sans definition/theory.

Conclusion

Hardimon's and Spencer's concepts get around Fish's (2002: 6) objection—but Murray's doesn't. Murray simply claims that the clusters are races without really thinking it through or providing justification for his claim. Philosophers of race (Hardimon, 2017; Spencer, 2014; 2019a, b), on the other hand, have provided sound justification for belief in race. Murray is not fair to the social constructivist position (good accounts can be found in Zack (2002), Hardimon (2017), and Haslanger (2000)). Murray seems to be one of those "Social constructivists say race doesn't exist!" people, but this is false: social constructs are real, and the social does have potent biological effects. Social constructivists are realists about race (Spencer, 2012; Kaplan and Winther, 2014; Hardimon, 2017), contra Helmuth Nyborg.

Murray (2020: 17) asks: "Why me? I am neither a geneticist nor a neuroscientist. What business do I have writing this book?" If you are reading this book for a fair—philosophical—treatment of race, look instead to actual philosophers of race, not to Murray et al, who, as shown, have no definition of race and just assume its existence. Spencer's Blumenbachian Partitions/Hardimon's minimalist races are how we should understand race in American society, not philosophically empty accounts.

Murray is right—race exists. Murray is also wrong—his kinds of races do not exist. That is, Murray is right, but he gives no argument for his belief. His "orthodoxy" is also right about race—since we should accept pluralism about race, there are many different ways of looking at race, what it is, its influence on society, and how society influences it. I would rather be wrong and have an argument for my belief than be right and appeal to "commonsense" without one.

Just-so Stories: Mass Killings

2000 words

Mass shootings occur about every 12.5 days (Meindl and Ivy, 2017), so figuring out why this is the case is of utmost importance. There are, of course, complex and multi-factorial reasons why people turn to mass killing, with popular fixes being to change the environment and to attempt to identify at-risk individuals before they carry out such heinous acts.

Just-so stories take many forms—why men have beards, why humans fear snakes and spiders, why men go bald, why humans have big brains, why certain genes appear in different populations at different frequencies, etc. The trait—or the genes that influence the trait—is said to be fitness-enhancing and therefore selected-for: it becomes "naturally selected" (see Fodor and Piattelli-Palmarini, 2010, 2011) and goes to fixation in the species. Mass shootings are becoming more frequent and deadlier in America; is there any evolutionary rationale behind this? Don't worry, the just-so storytellers are here to tell us why these sorts of actions are and have been prevalent in society.

The end result is a highly provocative interpretation of combining theories of human nature and evolutionary psychology. Additionally, community development and connectedness are described as evolved behaviors that help provide opportunities for individuals to engage and support each other in a conflicted society. In sum, this manuscript helps piece together centuries old [sic] theories describing human nature with current views addressing natural selection and adaptive behaviors that helped shape the good that we know in each person as well as the potential destruction that we seem to tragically be witnessing with increasing frequency. At the time of this manuscript publication yet another mass campus shooting had occurred at Umpqua Community College (near Roseburg, Oregon). (Hoffman, 2015: 3-4, Philosophical Foundations of Evolutionary Psychology)

It seems that Hoffman (a psychology professor at Metropolitan State University) is implying that actions like "mass campus shootings" are part of "the potential destruction that we seem to tragically be witnessing with increasing frequency." Hoffman (2015: 175) speaks of "genetic skills" and says that just "because an individual has the genetic skills to be an athlete, artist, or auto-mechanic does not mean that ipso facto it will happen—what actually defines the outcomes of a specific human behavior is a very complex social and environmental process." So, at least, Hoffman seems to understand (and endorse) the GxE/DST view.

There are more formal presentations of the claim that such actions are "based on an evolutionary compulsion to take action against a perceived threat to their status as males, which may pose a serious threat to their viability as mates and to their ultimate survival" (Muscoreil, 2015). (Let's hope they stayed an undergrad.)

Muscoreil (2015) claims that such acts are due to status-seeking—taking action against other males perceived as a threat to one's social status and reproductive success. Of course, killing off the competition would spread that individual's genes through the population, increasing the frequency of those traits if he happens to have more children (so the just-so story goes). Though the storytellers are hopeful: Muscoreil (2015) proposes to work toward "peace and healing", whereas Hoffman (2015: 176) proposes that we work on cooperation, which was evolutionarily adaptive, and so "communities not only have the capacity but also more importantly an obligation to create specific environments that stimulates and nurture cooperative relationships, such as the development of community service activities and civic engagement opportunities." So it seems that these authors aren't so doom-and-gloom—through community outreach, we can come together and attempt to decrease these kinds of crimes, which have been on the rise since 1999.

There is a paraphilia called "hybristophilia" in which a woman is sexually aroused by the thought of being cheated on, or even by the thought of her partner committing heinous crimes such as rape and murder. Some women are even attracted to serial killers; they tend to be in their 40s and 50s, and through the killer, it is said, the woman gains a sense of status in her head. Two kinds of women fall for serial killers: those who think they can "change" the killer and those attracted through news headlines about the killer's actions. Others say lonely women who want attention write to serial killers because the killers are more likely to write back. This, clearly, points to an innate evolutionary drive for women to be attracted to the killer so they can feel more protected—even when they are not physically with him.

Of course, if there were no guns there would still (theoretically) be mass killings, as anything and everything can be used as a weapon to cause harm to another (which is why this is about mass killings and not mass murders). So, evolutionary psychologists note that a certain action is still prevalent (the fact that autogenic massacre exists) and attempt to explain it in a way only they can—through the tried and tested just-so story method.

Klinesmith et al (2006) showed that men who interacted with a gun showed greater subsequent increases in testosterone levels than those who tinkered with the board game Mouse Trap—and those with the greater testosterone increases added more hot sauce to a cup of water they believed another subject would drink, the study's measure of aggression. They conclude that "exposure to guns may increase later interpersonal aggression, but further demonstrates that, at least for males, it does so in part by increasing testosterone levels" (Klinesmith et al, 2006: 570). So, on this account, guns may increase aggressive behavior through an increase in testosterone. The study has the usual pitfalls—a small sample (n = 30) of college-age men (younger means more aggressive, on average)—and so cannot be generalized. But the idea is out there: holding a gun makes a man feel more powerful and dominant, so his testosterone levels increase, BUT! the testosterone increase would not be driving the cause. It has even been said that mass shooters are "low dominance losers". Lack of attention leads to decreased social status, which means fewer women are willing to talk with the man, which makes him think his access to women is decreasing due to his lack of social status; when he gets access to a weapon, his testosterone increases and he can give in to his evolutionary compulsions, thereby increasing his virility and access to mates.

Elliot Rodger is one of these types. He killed six people because he was shunned and had no social life—he wanted to punish the women who rejected him and the men he envied. Being biracial himself (half white and half Asian), he described his hatred for inter-racial couples and couples in general, the fact that he could never get a girlfriend, and the conflicts that occurred in his family. Of course, all of his life experiences coalesced into the actions he undertook that day—and to the evolutionary psychologist, it is all understandable through an evolutionary lens. He could not get women and was jealous of the men who could, so why not take some of them out and get the "retributive justice" he so yearned for? Evolutionary psychology 'explains' his and similar actions. (VanGeem, 2009 espouses similar ideas.)

These ideas about evolutionary psychology and mass killings can even be extended to terrorism—as I myself (stupidly) have written about before (see Rushton, 2005). Rushton uses his (refuted) genetic similarity theory (GST; an extension of kin selection and Dawkins' selfish gene theory) to show why suicide bombers are motivated to kill.
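For reference, the formal core of the kin-selection reasoning that GST stretches is Hamilton's rule (a standard result in evolutionary biology, not Rushton's own formula): a gene for altruistic self-sacrifice can spread only when the relatedness-weighted benefit exceeds the cost to the actor.

```latex
% Hamilton's rule: altruism is favored when
%   r = genetic relatedness between actor and recipient(s)
%   b = fitness benefit to the recipient(s)
%   c = fitness cost to the actor
rb > c
```

Rushton's move is to inflate r by treating co-ethnics as quasi-kin; since relatedness among random co-ethnics is far lower than among close kin, the inequality is implausible at the ethnic-group scale, which is one reason the theory is characterized above as refuted. With that formal note in hand, here is Rushton: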

These political factors play an indispensable role but from an evolutionary perspective aspiring to universality, people have evolved a ‘cognitive module’ for altruistic self-sacrifice that benefits their gene pool. In an ultimate rather than proximate sense, suicide bombing can be viewed as a strategy to increase inclusive fitness. (Rushton, 2005: 502)

"Genes … typically only whisper their wishes rather than shout" (Rushton, 2005: 502). Note the Dawkins-like wording. Rushton, wisely, cautions in his conclusion that his genetic similarity theory is only one of many reasons why things like this occur and that causation is complex and multi-factorial—right, nice cover. To Rushton, the suicide bomber acts to ensure that those most closely related to him (his family and his ethnic group as a whole) survive and propagate more of their genes, increasing the selfishness and ethnocentrism of that ethnic group. Note how Rushton, despite his protestations to the contrary, tries to 'rationalize' racism and ethnocentric behavior as being 'in the genes', with the selfish genes having their 'vehicle' behave more selfishly in order to increase the frequency of the copies of themselves found in co-ethnics. (See Noble, 2011 for a refutation of Dawkins' theory.) Ethnic nationalism and genocide are the "dark side to altruism", states Rushton (2005: 504), and this altruistic behavior, in principle, could show why Arabs commit suicide bombings and similar attacks.

Jetter and Walker (2018) show that "news coverage is suggested to cause approximately three mass shootings in the following week, which would explain 58 percent of all mass shootings in our sample", looking at ABC World News Tonight coverage over the period January 1, 2013 to June 23, 2016. Others have also suggested that such a "media contagion" effect exists in regard to mass shootings (Towers et al, 2015; Johnston and Joy, 2016; Meindl and Ivy, 2017; Lee, 2018; Pescara-Kovach et al, 2019). The idea of such "media contagion" makes sense: if one is already harboring ideas of attempting a mass killing, seeing them carried out in one's own country by people around one's own age may prompt the thought "I can do that, too." And so this could be one of the reasons for the increase in such attacks—the sensationalist media constantly covering the events and blasting the name of the perpetrator all over the airwaves.

Though, contrary to popular belief, a mass shooter is not disproportionately likely to be white—relative to population share, he is more likely to be Asian. Between 1982 and 2013, out of the last 20 mass killings of the period, 45 percent (9) were committed by non-whites. Asians, at 6 percent of the US population, were 15 percent of the killers over those 31 years. So, relative to population size, Asians commit the most mass shootings, not whites. (See also Mass Shootings by Race; they have up-to-date numbers.) Chen et al (2015) showed that:

being exposed to a Korean American rampage shooter in the media and perceiving race as a cause for this violence was positively associated with negative beliefs and social distance toward Korean American men. Whereas prompting White-respondents to subtype the Korean-exemplar helped White-respondents adjust their negative beliefs about Korean American men according to their attribution of the shooting to mental illness, it did not eliminate the effect of racial attribution on negative beliefs and social distance

Mass shooters who were Asian or another non-white minority got a lot more attention and received longer stories than white shooters did. "While the two most covered shootings are perpetrated by whites (Sandy Hook and the 2011 shooting of Congresswoman Gabrielle Giffords in Tucson, Arizona), both an Asian and Middle Eastern shooter garnered considerable attention in The Times" (Schildkraut, Elsass, and Meredith, 2016).
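To make the arithmetic behind the over-representation claim two paragraphs up explicit: the comparison being run is a representation ratio, a group's share of shooters divided by its share of the population. A quick sketch using the figures cited above plus an approximate Census white-population share (my addition, for illustration only):

```python
# Representation ratio = group's share of mass shooters / share of US population.
# Shares are the post's cited figures plus an approximate (~63%) white population
# share for the period; illustrative arithmetic only, not an independent analysis.
def representation_ratio(shooter_share: float, population_share: float) -> float:
    return shooter_share / population_share

print(representation_ratio(0.15, 0.06))  # Asians: 15% of shooters, 6% of pop -> 2.5
print(representation_ratio(0.55, 0.63))  # whites: 55% of shooters, ~63% of pop -> ~0.87
```

A ratio above 1 means over-representation; this is the sense in which the paragraph above says Asians, not whites, are over-represented among mass shooters.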

Although Hoffman (2015) and Muscoreil (2015) state that we should look to the community to ensure that individuals are not socially isolated so that these kinds of things may be prevented, there is still no way to predict who a mass shooter will be. Others propose that, due to the sharp increase in school shootings, steps should be taken to evaluate the mental health of at-risk students (Paoloni, 2015; see also Knoll and Annas, 2016) and attempt to stop these kinds of things before they happen. Mental illness cannot predict mass shootings (Leshner, 2019), but "evolutionary psychologists" cannot either. We did not need the just-so storytelling of Rushton, Hoffman, and Muscoreil to explain why mass killers exist—the solutions put forth by Hoffman and Muscoreil are fine, but we did not need just-so story 'reasoning' to reach them.

Race, Test Bias, and ‘IQ Measurement’

1800 words

The history of standardized testing—including IQ testing—is contentious. What causes score distributions to differ between groups of people? I have stated at least four possible reasons for the test gap:

(1) Differences in genes cause differences in IQ scores;

(2) Differences in environment cause differences in IQ scores;

(3) A combination of genes and environment causes differences in IQ scores; and

(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.

I hold to (4) since, as I have noted, the hereditarian-environmentalist debate is frivolous. There is, as I have been saying for years now, no agreed-upon definition of ‘intelligence’, since there are such disparate answers from the ‘experts’ (Lanz, 2000; Richardson, 2002).

For the lack of such a definition only reflects the fact that there is no worked-out theory of intelligence. Having a successful definition of intelligence without a corresponding theory would be like having a building without foundations. This lack of theory is also responsible for the lack of some principled regimentation of the very many uses the word 'intelligence' and its cognates are put to. Too many questions concerning intelligence are still open, too many answers controversial. Consider a few examples of rather basic questions: Does 'intelligence' name some entity which underlies and explains certain classes of performances, or is the word 'intelligence' only sort of a shorthand-description for 'being good at a couple of tasks or tests' (typically those used in IQ tests)? In other words: Is 'intelligence' primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities (compare Deese 1993)? Or should we turn our backs on the noun 'intelligence' and focus on the adverb 'intelligently', used to characterize certain classes of behaviors? (Lanz, 2000: 20)

Nash (1990: 133-4) writes:

Always since there are just a series of tasks of one sort or another on which performance can be ranked and correlated with other performances. Some performances are defined as 'cognitive performances' and other performances as 'attainment performances' on essentially arbitrary, common sense grounds. Then, since 'cognitive performances' require 'ability' they are said to measure that 'ability'. And, obviously, the more 'cognitive ability' an individual possesses the more that individual can achieve. These procedures can provide no evidence that IQ is or can be measured, and it is rather beside the point to look for any, since that IQ is a metric property is a fundamental assumption of IQ theory. It is impossible that any 'evidence' could be produced by such procedures. A standardised test score (whether on tests designated as IQ or attainment tests) obtained by an individual indicates the relative standing of that individual. A score lies within the top ten percent or bottom half, or whatever, of those gained by the standardisation group. None of this demonstrates measurement of any property. People may be rank ordered by their telephone numbers but that would not indicate measurement of anything. IQ theory must demonstrate not that it has ranked people according to some performance (that requires no demonstration) but that they are ranked according to some real property revealed by that performance. If the test is an IQ test the property is IQ — by definition — and there can in consequence be no evidence dependent on measurement procedures for hypothesising its existence. The question is one of theory and meaning rather than one of technique. It is impossible to provide a satisfactory, that is non-circular, definition of the supposed 'general cognitive ability' IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.
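Nash's telephone-number point is a measurement-theoretic one: a rank ordering survives any monotone rescaling of the scores, so a rank ordering alone cannot establish that an interval-scale property is being measured. A small sketch of this point (my illustration, not Nash's):

```python
# A rank ordering is preserved under ANY monotone transform, so ranking alone
# cannot fix an interval scale: the gap sizes carry no measurement content.
import numpy as np
from scipy.stats import spearmanr, pearsonr

scores = np.array([85.0, 90.0, 100.0, 110.0, 130.0])
rescaled = np.exp(scores / 20.0)  # an arbitrary monotone rescaling

print(spearmanr(scores, rescaled)[0])  # 1.0 -- the ranking is identical
print(pearsonr(scores, rescaled)[0])   # < 1.0 -- interval structure not preserved
```

Two scorings that rank everyone identically can disagree arbitrarily about the size of the gaps between people, which is exactly Nash's point that ranking performances demonstrates measurement of no property.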

This is similar to Mary Midgley's critique of 'intelligence' in her last book before her death, What Is Philosophy For? (Midgley, 2018). The 'definitions' of 'intelligence' and, along with them, its 'measurement' have never been satisfactory. Haier (2016: 24) refers to Gottfredson's 'definition' of 'intelligence', stating that 'intelligence' is a 'general mental ability.' But if that is the case—that it is a 'general mental ability' (g)—then 'intelligence' does not exist, because 'g' does not exist as a property in the brain. Lanz's (2000) critique is also like Howe's (1988; 1997): 'intelligence' is a descriptive, not explanatory, term.
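Nash's telephone-number point can be made concrete. Here is a tiny sketch (my own illustration, with made-up numbers): any list of numbers can be percentile-ranked, and the resulting 'scores' order people perfectly well while measuring no property of them whatsoever:

```python
# Percentile-ranking arbitrary numbers (here, fake phone numbers).
# The ranks look like test 'scores' but measure nothing about the people.
phone_numbers = [5559821, 5550114, 5557730, 5553402, 5556265]

ranked = sorted(phone_numbers)
for x in phone_numbers:
    pct = 100 * ranked.index(x) / (len(ranked) - 1)
    print(f"{x}: {pct:.0f}th percentile")
```

This is exactly the gap Nash points to: a standardized score establishes relative standing within a norming group, not the measurement of an underlying property.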

Now that the concept of ‘intelligence’ has been covered, let’s turn to race and test bias.


Test items are biased when they have different psychological meanings across cultures (He and van de Vijver 2012: 7). If they have different meanings across cultures, then the tests will not reflect the same 'ability' between cultures. Being exposed to the knowledge on a test—and its correct usage—is imperative for performance. For if one is not exposed to the content on the test, how can one be expected to do well? Indeed, there is much evidence that minority groups are not acculturated to the items on the test (Manly et al, 1997; Ryan et al, 2005; Boone et al, 2007). This is what IQ tests measure: acculturation to the tests' constructors, school curriculum, and school teachers—aspects of white, middle-class culture (Richardson, 1998). Ryan et al (2005) found that reading and educational level, not race or ethnicity, were related to performance on psychological tests.

Serpell et al (2006) took 149 white and black fourth-graders and randomly assigned them to ethnically homogeneous groups of three, working on a motion task on a computer. Both blacks and whites learned equally well, but the transfer outcomes were better for blacks than for whites.

Helms (1992) claims that standardized tests are "Eurocentric", which is "a perceptual set in which European and/or European American values, customs, traditions and characteristics are used as exclusive standards against which people and events in the world are evaluated and perceived." In her conclusion, she stated that "Acculturation and assimilation to White Euro-American culture should enhance one's performance on currently existing cognitive ability tests" (Helms, 1992: 1098). There just so happens to be evidence for this (along with the studies referenced above).

Fagan and Holland (2002) showed that when exposure to different kinds of information was required, whites did better than blacks, but when the items were based on generally available knowledge, there was no difference between the groups. Fagan and Holland (2007) asked whites and blacks to solve problems found on usual IQ-type tests (e.g., standardized tests). Half of the items were solvable on the basis of generally available information, but the other items were solvable only on the basis of previously acquired knowledge, which indicated test bias (Fagan and Holland, 2007). They, again, showed that when knowledge is equalized, so are IQ scores. Thus, cultural differences in information acquisition explain IQ score differences. "There is no distinction between crassly biased IQ test items and those that appear to be non-biased" (Mensh and Mensh, 1991). This is because each item is chosen because it agrees with the distribution that the test constructors presuppose (Simon, 1997).

How do the neuropsychological studies referenced above, along with Fagan and Holland's studies, show that test bias—and, along with it, test construction—is built into the test and causes the observed distribution of scores? Simple: since the test constructors come from a higher social class, and the items chosen for inclusion on the test are more likely to be encountered by certain cultural groups than others, it follows that lower-scoring groups score lower because they were not exposed to the culturally-specific knowledge used on the test (Richardson, 2002; Hilliard, 2012).


The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)

This is very easily seen in how such tests are constructed. The biases go back to the beginning of standardized testing—one of the earliest such tests being the SAT. The tests' constructors had an idea of who was or was not 'intelligent' and so constructed the tests to show what they already 'knew.'

…as one delves further … into test construction, one finds a maze of arbitrary steps taken to ensure that the items selected — the surrogates of intelligence — will rank children of different classes in conformity with a mental hierarchy that is presupposed to exist. (Mensh and Mensh, 1991)

Garrison (2009: 5) states that standardized tests "exist to assess social function" and that "Standardized testing—or the theory and practice known as 'psychometrics' … is not a form of measurement." Tests are constructed today the same way they were in the early 1900s—with arbitrary items and a presupposed mental hierarchy, which then become baked into the tests by virtue of how they are constructed.
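To make the mechanism concrete, here is a toy simulation (my own construction, not any test publisher's actual procedure) of the item-selection logic Mensh and Mensh and Simon describe: pilot items are kept or discarded according to whether their pass rates fit the presupposed ranking, so the finished test 'finds' a gap that the piloted item pool never contained:

```python
# Toy sketch of biased item selection. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n_items, n_people = 200, 1000
group = rng.integers(0, 2, n_people)        # two cultural groups, 0 and 1

# piloted pass/fail responses: pure coin flips, so no 'real' group difference
responses = rng.random((n_items, n_people)) < 0.5

kept = []
for i in range(n_items):
    p0 = responses[i, group == 0].mean()    # pass rate in group 0
    p1 = responses[i, group == 1].mean()    # pass rate in group 1
    if p0 - p1 > 0.02:                      # keep only items that favor group 0
        kept.append(i)

scores = responses[kept].sum(axis=0)
print(scores[group == 0].mean(), scores[group == 1].mean())
# group 0 now outscores group 1: the gap was manufactured by item selection
```

Real test construction uses subtler statistics (item-total correlations, difficulty targets), but the logic is the same: whatever pattern the retained items are required to fit is the pattern the finished test will report.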

IQ-ists like to say that certain genes are associated with high intelligence (using their GWASes), but what could the argument possibly be that would show that variation in SNPs would cause variation in ‘intelligence’? What would a theory of that look like? How is the hereditarian hypothesis not a just-so story? Such tests were created to justify the hierarchies in society, the tests were constructed to give the results that they get. So, I don’t see how genetic ‘explanations’ are not just-so stories.

1 Blacks and whites are different cultural groups.

2 If (1), then they will have different experiences by virtue of being different cultural groups.

3 So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures and so, all tests of ability are culture-bound.

Knowledge, Culture, Logic, and IQ

Rushton and Jensen (2005) claim that the evidence they review over the past 30 years of IQ testing points to a 'genetic component' to the black-white IQ gap, relying on the flawed Minnesota study of twins "reared apart" (Joseph, 2018)—among other methods—to generate heritability estimates, and state that "The new evidence reviewed here points to some genetic component in Black–White differences in mean IQ." The concept of heritability, however, is a flawed metric (Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2014; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017). That G and E interact means that we cannot tease out "percentages" of nature and nurture's "contribution" to a "trait." So, one cannot point to heritability estimates as if they point to a "genetic cause" of the score gap between blacks and whites. Further note that the gap has closed in recent years (Dickens and Flynn, 2006; Smith, 2018).

And now, here is another argument based on the differing experiences of cultural groups, which explain IQ score differences (e.g., Mensh and Mensh, 1991; Manly et al, 1997; Kwate, 2001; Fagan and Holland, 2002, 2007; Cole, 2004; Ryan et al, 2005; Boone et al, 2007; Au, 2008; Hilliard, 2012; Au, 2013).

(1) If children of different class levels have experiences of different kinds with different material; and
(2) if IQ tests draw a disproportionate amount of test items from the higher classes; then
(c) higher class children should have higher scores than lower-class children.

Eugenics

2750 words

Yesterday, the biologist Richard Dawkins—of The Selfish Gene (Dawkins, 1976) fame—set off a firestorm on Twitter with a tweet about eugenics (it just so happened to be Galton's birthday).

Deploring the idea means we should not do it—what 'value' would there be in breeding humans to jump higher or run faster? Such ideas, and the push for them, are a mask for eugenic policies—eugenics can and will slip in through the back door using current technologies.


Adapting an argument from Walter Glannon in Genes and Future People (Glannon, 2001: 109):

Where case (A) is CRISPR modifications; case (U) is eugenics; and (B), (C), … (N) are intermediaries.

(1) Case (A) is acceptable.
(2) But cases (B), (C), … (N) … are unacceptable.
(3) Cases (A) and (U) are assimilable, so they are differences in degree, which fall along a continuum of the same type.
(4) If case (A) is permitted, then it will lead to a precedent to allow case (U).
(5) Permitting case (A) will cause cases (B), (C), …, (N) to occur.
(6) Thus, case (A) should be impermissible.

Glannon (2001: 109) rejects (3) stating that “treatment and enhancement are different in kind, not merely degree, and they can correspond to distinct aims that can be articulated.”

Glannon (2001: 110) also rejects (5), stating that "if case (a) is not relevantly similar to cases (b) through (n), then it is unlikely that (a) would cause (b) through (n) to occur. Hence premise (5) is false as well." (See Govier, 1985 for these argument forms as well, mainly the feasibility argument.)

The problem with his rejection of (3) is that differences of degree can combine to become significant. So if case (A) is similar to (B)–(N), allowing (A) will lead to (U) down the line.

… all treatments are enhancements (though not all enhancements are treatments), and … not all enhancements are, by definition alone, ethically unacceptable. (Baylis, 2019: 59)

But if the treatments (which are all enhancements) will, eventually, lead us down the psychological slippery slope to accepting eugenics, then we should not perform them. "Yea, the treatments were fine. Now they want to prevent this group from doing X and that group from doing Y—what's the big deal? It's similar to enhancement, is it not?"

If it is fine to fix a mutation in a gene in a somatic cell, then why not edit the germline so that that individual’s future kin won’t be subjected to that? It would be a waste of time—and money—to keep editing the same family’s somatic cells when they can just edit the germline and get it over with, right?

Now, some may cry “Slippery slope fallacy!” But just crying “Fallacy!” at me does not cut it—one must show that (U) does not follow from (A) and (B)-(N).

The argument provided above is a psychological slippery slope argument. Psychological slippery slope arguments differ from logical slippery slope arguments—where, once a first step is taken, one is logically committed to taking subsequent steps unless there are logical reasons to avoid them—in that they are based on probability; that is, they are inductive. A psychological slippery slope argument holds that once one practice is accepted, similar practices will also be accepted, since people see no significant difference between them. Accepting one practice psychologically prepares one to accept another, so we are looking at what may happen, not at what the rules and logic of the assertion logically entail.

So if we allow X (gene therapy, negative eugenics) then we will ride down the slippery slope to Y (positive eugenics, genetic enhancement).


When Dawkins says that eugenics 'works', what does that mean? That because it is possible to select for certain traits in non-human animals, it is therefore possible for humans? I don't know who (sanely) can deny this—in theory—but how would it work in practice? Whether it's state-mandated eugenics (for instance, policing who has babies with whom) or gene editing in humans, we do not know what environments would look like in the future, so how would we select for traits that would be beneficial in an unknown environment? Though, I can see eugenicists attempting to use some shoddy GWAS data, such as 'Look at Hill et al (2019), these genes are associated with high income, so if we edit/add them then others will have the ability for high income too!' (The high-income genes must be doing really well for the rich, as the world's 2,153 billionaires own more wealth than 4.6 billion people—60 percent of the world's population.)

Dawkins, though, seems to be forgetting a few things:

(1) When humans breed animals and attempt to select for certain traits, the environments are as uniform as possible.
(2) What would it mean for eugenics to 'work'? Is that question independent of ideological/moral questions? Only if there could be a definition of 'favorable result' or 'success' divorced from values could eugenics 'work' and be a 'success.'
(3) Eugenics is a value-laden ideology. Science is—supposedly—value-free. So by that definition, eugenics is not science, it is a social movement.

Humans selectively bred dogs from wolves, and now we have the gamut from big and tall, to short and fat, to short and small, with many different phenotypes in between. But look at pugs. We breed them for the flatter, 'cuter' face, but what does that do for them? Pugs suffer from what is called brachycephalic airway obstruction syndrome: "Brachycephalic airway obstruction syndrome (BAOS) or Brachycephalic Obstructive Airway Syndrome (BOAS) occurs in all breeds with significant brachycephaly. Brachycephaly is abnormally short head shape (compared with the ancestral, natural, head shape of dogs) with, in some cases, greatly shortened upper jaws and noses." This can lead to asphyxiation of the dog. What do ya know? Something unintended (possible asphyxiation) occurred due to what we selected for (shorter heads, flatter faces). Who's to say what would happen if we attempted to select for 'income genes' (or whatever else) in humans?

Even prenatal screening can be used to get eugenics in through the back door (Thomas and Rothman, 2016; also see Duster, 2003). I have argued—for and against—the use of PGD (preimplantation genetic diagnosis) back in 2018. I have also covered eugenic laws in America and throughout the world during the 20th century. Allowing (A) will lead us right back to (U)—where we were in the 20th century. Selecting against or for certain traits/genes may lead to unintended consequences (like the breathing problems that pugs have). So, why should we do things to humans if we don't know the consequences of what we are selecting for or against?


Wilson (2017: 46) describes how value-laden eugenics is, stating that it is not “merely theoretical”, nor “primarily mathematical”:

Identifying eugenics as applied science may be thought to imply very little, saying only that eugenics does not fall under the contrasting mythical category of “pure science.” But the labeling of eugenics as applied science should be taken not so much to register a location on the putative divide between pure and applied science as to distinguish eugenics from a certain idealization of scientific inquiry. It signals three things that eugenics is not, and never was: it is not merely theoretical, not primarily mathematical or statistical, and not value-free.

First, eugenics is not merely theoretical, in the sense of being concerned primarily with abstract or idealized conditions (cf. theoretical physics or theoretical biology). It is focused on, and very much motivated by, perceived problems in real-world human populations and their solution.

Second, eugenics is not primarily mathematical or statistical in nature, however much it may at times draw from or rely on mathematical techniques or results. Galton himself was an accomplished mathematician, inventing several statistical techniques, such as the quantified idea of a standard deviation and the use of regression lines in statistics, which remain with us today. Galton's most prominent successors in the United Kingdom—Karl Pearson and Ronald Fisher—were also statistically sophisticated innovators who led a biometric wing of the eugenics movement. While the quantitative measurement of both individuals and populations has played an important role in the short history of eugenics, much eugenic work bears no closer relationship to the underlying statistics than does the bulk of contemporary, biological, cognitive, and social sciences.

Third, eugenics is not value-free science, and doesn’t purport to be: it is deeply and often explicitly value-laden. I want to take a little more time to explain this dimension to the applied nature of eugenics, for doing so will take us to some core aspects of eugenics as a mixture of applied science and social movement.

[…]

First, the evaluative judgments that The Eugenic Mind rests on go well beyond those for traits, behaviors, and characteristics whose desirability or undesirability can be properly taken for granted. Second, eugenic thinking presumes that there are more desirable and more undesirable—better or worse—kinds or sorts of people. For this reason, the primary way in which eugenics has sought to improve the quality of human lives over generational time has been by advocating for ideas and policies that promote there being a greater proportion of better kinds or sorts of people in future generations.

To illustrate the first of these points, many eugenic policies were either explicitly stated in terms of, or implicitly relied on, a positive valuation of high intelligence and a negative valuation of low intelligence, especially as measured by standard IQ tests, such as the Stanford-Binet. While this positive valuation of intelligence is still widely shared in our society when expressed abstractly, as a part of science that aims to inform and shape what sorts of people there should be in future generations, it is a value judgment that is significantly more questionable than that concerning avoiding pain and suffering. Eugenic thinking and practice also rested on assessments of a broader range of personality and dispositional tendencies—for example, clannishness, cheerfulness, laziness, honesty, criminality—not only whose transmissibility across generations was considered controversial but whose very existence as intrinsic traits and tendencies has never had substantial scientific support.

Likewise, turning to the second point, the eugenic thinking that informed immigration policy in the United States following the First World War held that people of different races or ethnicities were differentially desirable as immigrants coming into the country. This differential valuation was applied to groups such as Poles, Greeks, Italians, Jews, and Slavs. Thus, eugenic immigration policies aimed to promote the influx of immigrants who were viewed as more desirable in nature, and to restrict the immigration of those deemed to be of inferior stock. We now question whether such groups of people are properly thought of as more or less desirable sorts of people to produce future generations of American nationals. But we also rightly wonder whether these are sorts of people, in the relevant sense, at all.

The eugenicists of the 20th century advocated laws, policies, ideas, and practices to ensure that the 'right people' would have more children (positive eugenics), to lower the birth rates of the 'wrong people' (negative eugenics), or both.

Connecticut, for instance, enacted a law in 1896 stating that no man or woman who was epileptic or feeble-minded could marry, or live together, if the woman was under 45 years old. Indiana, in 1907, passed sterilization laws for criminals, rapists, and those with incurable diseases. And by 1914, at least half of the states in America barred marriages if one of the participants had a mental defect. The SCOTUS case Buck v. Bell made the sterilization of the 'feebleminded' constitutional in 1927, with more than 30 states participating, to different degrees, by the 1930s. By the 1930s, more than 12,000 sterilizations had been carried out, with at least 7,500 occurring in California; by the 1960s, more than 63,000 sterilizations had been carried out in America. (See Alexander, 2017 for references.)

Eugenic policies can be used for either wing of politics—right, left, or center. Quoting Alexander (2017: 69):

Sterilisation was seen as progressive and an obvious responsibility for a state-organised society with a social conscience. A Swedish doctor writing in 1934 stated that ‘[t]he idea of reducing the number of carriers of bad genes is entirely reasonable. It will naturally be considered within the preventative health measures in socialist community life’ (Burleigh, 2000, p. 366). Sterilisation laws in Sweden stayed in place until the 1970s. Based on a solid biological basis in the power of nature over nurture, eugenics represented the rational response of progressive science-based state control in the light of the social problems contributed by the unfit and the feeble-minded. The Nobel prize-winning Marxist geneticist Herman Muller (1890-1967) declared to the Third International Congress of Eugenics in 1932 ‘[t]hat imbeciles should be sterilized is unquestionable’. In 1935, Muller envisaged that, through selective breeding, within a century most people could have ‘the innate qualities of such men as Lenin, Newton, Leonardo, Pasteur, Beethoven, Omar Khayyam, Pushkin, Sun Yat Sen, Marx, or even to possess their varied faculties combined’ (Muller, 1936, p. 113).

Lastly, here is a story of a girl who survived eugenics:

In this landmark legal case decided in 1996 by Madame Justice Joanne Veit, eugenics survivor Leilani Muir successfully sued the province of Alberta for wrongful confinement and sterilization relating to her admission to and treatment at the Provincial Training School for Mental Defectives in Red Deer, Alberta, from 1955 until 1965. As a child of ten, Leilani had found herself swept up by the eugenics movement. After being institutionalized, Leilani was sterilized putatively in accord with the Sexual Sterilization Act of Alberta, a law that was in place in the province until 1972. That provincial law, one of only two enacted in Canada’s history, authorized the eugenic sterilization of individuals whose recommendation for sterilization by the medical superintendents of provincial institutions or other state authority figures had been approved by a four-person committee known informally as the “Eugenics Board.” The legal wrongfulness of both Leilani’s institutionalization/confinement and her sterilization that was established in Muir v. Alberta drew attention to many problematic features of how eugenics was practiced in the province, including how the Eugenics Board did its work.

[…]

Leilani was distinctive, and admirably so as I got to know her better, but not different in the way one might expect, given her history. She was, to put it in terms of a concept that structures our perceptions of human variation, as normal as can be. Yet Leilani had been institutionalized at a school for mental defectives for an extended period of time as a child, teenager, and as a young adult; she had also been classified as a "moron"—a term whose colloquial familiarity now might make it surprising to some to learn that it was invented barely 100 years ago by the eugenicist and psychologist Henry Goddard to pick out "higher grade" mental defectives. Classified as a higher-grade mental defective, Leilani was sterilized putatively in accord with the Sexual Sterilization Act of Alberta. And all of that had further, unexpected, and devastating consequences for Leilani's post-institutional life.

How did this happen? Leilani was certainly different from the educated, upwardly mobile, middle-class people who populated my snug university surroundings. But she wasn’t that different from the less-educated, often class-stagnant, working-class people with whom I grew up. And, it turned out, she was not different from the many hundreds, if not thousands, of others who were subjected to the very same laws in Alberta. How does this kind of thing happen? (Wilson, 2017: 20, 22)


I don't know why Dawkins said that it's one thing to oppose eugenics on "ideological, political, moral grounds"; whether or not it 'works' and is a 'success' (see caveats above) is irrelevant, as the moral/political arguments against eugenics (and any supposed precursors) outweigh any arguments from 'utility.' Starting with negative eugenics/gene therapy can and will lead to modification/eugenics. State-mandated? Maybe not. But the attempt to take away choice from an unborn human (on, say, what he wants to do in life, if a parent is trying to 'select for' a certain trait or quality that supposedly will lead him down the path to doing X)? Definitely.

Eugenics is morally wrong. Anything that may lead to the slippery slope of eugenics is, then, by proxy, morally wrong. The psychological slippery slope argument provided proves this. The claim that eugenics can ‘work’ implies that our genes are US—that our genes make us who we are (e.g., Plomin, 2018). This is the cost of our gene-worshipping society. Instead of worshipping God, we now worship the gene—the gene god—thinking that our ‘destiny’ is in our genes and that if we choose certain genes—or certain people with the certain genes—then we can guide our society and evolution into something ‘good’ (whatever that means).

Just as traditional thought placed biological forms in the mind of God, so modern thought finds ways of endowing genes with ultimate formative power. (Oyama, 1985)

Herrnstein’s Syllogism

2650 words

1. If differences in mental abilities are inherited, and
2. if success requires those abilities, and
3. if earnings and prestige depend on success,
4. then social standing will be based to some extent on inherited differences among people. (Herrnstein, 1971)

Richard Herrnstein's article I.Q. in The Atlantic (Herrnstein, 1971) caused much controversy (Herrnstein and Murray, 1994: 10). Herrnstein's syllogism argued that, as environments become more similar, if differences in mental abilities are inherited, if success in life requires such abilities, and if earnings and prestige depend on success, then social standing will be based "to some extent on inherited differences among people." Herrnstein does not say this outright in the syllogism, but he is quite obviously talking about genetic inheritance. One can, however, look at the syllogism with an environmental lens, as I will show. Lastly, Herrnstein's syllogism crumbles since social class predicts success in life even when IQ is equated. So since family background and schooling explain the IQ-income relationship (income being a measure of success), Herrnstein's argument falls.

Note that Herrnstein came to measurement due to being a student of William Sheldon’s somatotyping. “Somatotyping lured the impressionable and young Herrnstein into a world promising precision and human predictability based on the measurement of body parts” (Hilliard, 2012: 22).

  1. If differences in mental abilities are inherited

Premise 1 is simple: "If differences in mental ability are inherited …" Herrnstein is obviously talking about genetic transmission, but we can look at this through a cultural/environmental lens. For example, Berg and Belmont (1990) showed that Jewish children of different socio-cultural backgrounds had different patterns of mental abilities, which were clustered in certain socio-cultural groups (all Jewish), showing that mental abilities are, in large part, culturally derived. Another objection could be that since there are no laws linking psychological/mental states with physical states (the mental is irreducible to the physical—meaning that mental states cannot be transmitted through (physical) genes), genetic transmission of psychological/mental traits is impossible. In any case, one can read the syllogism in terms of cultural transmission of mental abilities, disregard genetic transmission of psychological traits, and Herrnstein's intended argument fails.

We can accept all of the premises of Herrnstein’s syllogism and argue an environmental case, in fact (bracketed words are my additions):

1. If differences in mental abilities are [environmentally] inherited, and
2. if success requires those [environmentally inherited] abilities, and
3. if earnings and prestige depend on [environmentally inherited] success,
4. then social standing will be based to some extent on [environmentally] inherited differences among people.

The syllogism hardly changes, but my additions change what Herrnstein was arguing for—environmental, not genetic differences cause success and along with it social standing among groups of people.

The Bell Curve (Herrnstein and Murray, 1994) can, in fact, be seen as an at-length attempt to prove the validity of the syllogism empirically. Herrnstein and Murray (1994: 105, 108-110) have a full discussion of the syllogism. "As stated, the syllogism is not fearsome" (Herrnstein and Murray, 1994: 105). They go on to state that if intelligence (IQ scores, AFQT scores) is only a bit influenced by genes, and if success is only a bit influenced by intelligence, then only a small amount of success is inherited (genetically). Note that their measure of "IQ" is the AFQT—which is a measure of acculturated learning, measuring school achievement (Roberts et al, 2000; Cascio and Lewis, 2005).

"How much is IQ a matter of genes?", Herrnstein and Murray ask. They then discuss the heritability of IQ, relying, of course, on twin studies. They claim that the heritability of IQ is .6 based on the results of many twin studies. But the fatal flaw with twin studies is that the equal environments assumption (EEA) is false and, therefore, genetic conclusions should be dismissed outright (Burt and Simons, 2014, 2015; Joseph, 2015; Joseph et al, 2015; Fosse, Joseph, and Richardson, 2015; Moore and Shenk, 2016). Herrnstein (1971) also discusses twin studies in the context of heritability, attempting to buttress his argument. But if the main vehicle used to show that "intelligence" (whatever that is) is heritable is twin studies, why, then, should we accept the conclusions of twin research if the assumptions at the foundation of the field are false?

Block (1995) quotes Murray's misunderstanding of heritability from an interview Murray gave while touring for The Bell Curve:

"When I – when we – say 60 percent heritability, it's not 60 percent of the variation. It is 60 percent of the IQ in any given person." Later, he repeated that for the average person, "60 percent of the intelligence comes from heredity" and added that this was true of the "human species," missing the point that heritability makes no sense for an individual and that heritability statistics are population-relative.

So Murray used the flawed concept of heritability in the wrong way—hilarious.
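For readers unfamiliar with the statistic, here is a minimal sketch (invented numbers) of what h² = .6 actually asserts: a ratio of variances in a population, h² = Var(G)/Var(P). It says nothing about 60 percent of any individual's IQ, and it shifts when the environment shifts, with no genetic change at all:

```python
# Heritability as a population-level variance ratio. Invented numbers.
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(0, np.sqrt(6), 10_000)   # 'genetic' deviations, variance ~6
e = rng.normal(0, np.sqrt(4), 10_000)   # 'environmental' deviations, variance ~4
p = 100 + g + e                         # phenotype

print(round(g.var() / p.var(), 2))      # ~0.6: a property of this population

# Halve the environmental variation: h2 rises with no genetic change at all,
# because the estimate is relative to a population and its environments.
p2 = 100 + g + e / 2
print(round(g.var() / p2.var(), 2))     # ~0.86
```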

The main point of Herrnstein's argument is that, as environments become more uniform for everyone, the power of heredity will shine through, since the environment is then the same for everyone. But even if we could make the environment "the same", what would that even mean? How is my environment the same as yours, even if our surroundings are identical, if I react to or perceive the same thing differently than you do? The subjectivity of the mental undercuts the claim that environments can be made "more uniform." Herrnstein claimed that if no variance in environment exists, then the only thing that can influence success is heredity. This is not wrong, but how would it be possible to equalize environments? Are we supposed to start from square one—have the rich and powerful give up their wealth and status to "equalize environments"? According to Herrnstein and the 'meritocracy', those whose earnings and prestige depended on success, which in turn depended on inherited mental abilities, would still float to the top.

But what happens when both social class and IQ are equated? What predicts life success? Stephen Ceci reanalyzed the data on Terman's Termites (the term coined for those in the study) and found something quite different from what Terman had assumed. There were three groups in Terman's study—groups A, B, and C. Groups A and C comprised the top and bottom 20 percent of the full sample in terms of life success. At the start of the study, all of the children "were about equal in IQ, elementary school grades, and home evaluations" (Ceci, 1996: 82). Depending on the test used, the IQs of the children ranged between 142 and 155, which then decreased by about ten points during the second wave due to regression toward the mean and measurement error. So although groups A and C had equivalent IQs, they had starkly different life outcomes. (Group B comprised 60 percent of the sample and enjoyed mediocre life success.)
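That drop is roughly what regression toward the mean alone predicts. With test–retest correlation r, the expected retest score of someone selected at score x is mu + r(x − mu); a back-of-envelope sketch (my illustrative numbers, not Terman's data) reproduces a roughly ten-point drop:

```python
# Regression toward the mean for a highly selected group.
# mu, r, and x are assumptions for illustration only.
mu = 100   # population mean IQ
r = 0.8    # assumed test-retest correlation
x = 150    # first-test score of a selected child

expected_retest = mu + r * (x - mu)
print(expected_retest)  # 140.0 -- about a ten-point drop with no real change
```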

Ninety-nine percent of the men in the group that had the best professional and personal accomplishments, i.e., group A, were individuals who came from professional or business-managerial families that were well educated and wealthy. In contrast, only 17% of the children from group C came from professional and business families, and even these tended to be poorer and less well educated than their group A peers. The men in the two groups present a contrast on all social indicators that were assessed: group A individuals preferred to play tennis, while group C men preferred to watch football and baseball; as children, the group A men were more likely to collect stamps, shells, and coins than were the group C men. Not only were the fathers of the group A men better educated than those of group C, but so were their grandfathers. In short, even though the men in group C had equivalent IQs to group A, they did not have equivalent social status. Thus, when IQ is equated and social class is not, it is the latter that seems to be deterministic of professional success. Therefore, Terman's findings, far from demonstrating that high IQ is associated with real-world success, show that the relationship is more complex and that the social status of these so-called geniuses' families had a "long reach," influencing their personal and professional achievements throughout their adult lives. Thus, the title of Terman's volumes, Genetic Studies of Genius, appears to have begged the question of the causation of genius. (Ceci, 1996: 82-83)

Ceci used the Project Talent dataset to analyze the impact of IQ on occupational success. This study, unlike Terman’s, looked at a nationally representative sample of 400,000 high-school students “with both intellectual aptitude and parental social class spanning the entire range of the population” (Ceci, 1996: 85). The students were interviewed in 1960, then about 4,000 were again interviewed in 1974. “For all practical purposes, this subgroup of 4,000 adults represents a stratified national sample of persons in their early 30s” (Ceci, 1996: 86). So Ceci and his co-author, Henderson, ran several regression analyses that involved years of schooling, family and social background and a composite score of intellectual ability based on reasoning, math, and vocabulary. They excluded those who were not working at the time due to being imprisoned, being housewives or still being in school. This then left them with a sample of 2,081 for the analysis.

They looked at IQ as a predictor of variance in adult income in one analysis, which then showed an impact for IQ. “However, when we entered parental social status and years of schooling completed as additional covariates (where parental social status was a standardized score, mean of 100, SD = 10, based on a large number of items having to do with parental income, housing costs, etc.—ranging from low of 58 to high of 135), the effects of IQ as a predictor were totally eliminated” (Ceci, 1996: 86). Social class and education were very strongly related to predictors of adult income. So “this illustrates that the relationship between IQ and adult income is illusory because the more completely specified statistical model demonstrates its lack of predictive power and the real predictive power of social and educational variables” (Ceci, 1996: 86).
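Here is a minimal simulation of that statistical point (illustrative only: invented numbers, not Ceci and Henderson's data or model). If income is driven by parental SES and schooling, and IQ merely tracks SES, then IQ 'predicts' income only until the real causes are entered as covariates:

```python
# Toy demonstration that an IQ-income 'relationship' can vanish
# once its confounders are modeled. All numbers are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2081  # matches the size of Ceci's analysis sample, for flavor

ses = rng.normal(100, 10, n)                             # parental social status
school = 10 + 0.2 * (ses - 100) + rng.normal(0, 2, n)    # years of schooling
iq = 100 + 0.8 * (ses - 100) + rng.normal(0, 9, n)       # IQ tracks SES
income = 9000 + 40 * (ses - 100) + 300 * (school - 10) + rng.normal(0, 800, n)

# IQ alone looks predictive...
print(sm.OLS(income, sm.add_constant(iq)).fit().params)

# ...but its coefficient collapses once SES and schooling are controlled.
X = sm.add_constant(np.column_stack([iq, ses, school]))
print(sm.OLS(income, X).fit().params)
```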

They then considered high, average, and low IQ groups of about equal size, examining the regressions of earnings on social class and education within each group.

Regressions were essentially homogeneous and, contrary to the claims by those working from a meritocratic perspective, the slope for the low IQ group was steepest (see Figure 4.1). There was no limitation imposed by low IQ on the beneficial effects of good social background on earnings and, if anything, there was a trend toward individuals with low IQ actually earning more than those with average IQ (p = .09). So it turns out that although both schooling and parental social class are powerful determinants of future success (which was also true in Terman’s data), IQ adds little to their influence in explaining adult earnings. (Ceci, 1996: 86)

The same was also true for the Project Talent participants who continued school. For each increment of school completed, there was also an effect on their earnings.

Individuals who were in the top quartile of "years of schooling completed" were about 10 times as likely to be receiving incomes in the top quartile of the sample as were those who were in the bottom quartile of "years of schooling completed." But this relationship does not appear to be due to IQ mediating school attainment or income attainment, because the identical result is found even when IQ is statistically controlled. Interestingly, the groups with the lowest and highest IQs both earned slightly more than average-IQ students when the means were adjusted for social class and education (adjusted means at the modal value of social class and education = $9,094, $9,242, and $9,997 for low, average, and high IQ groups, whereas the unadjusted means at this same modal value = $9,972, $9,292, and $9,278 for the low, average, and high IQs.) (Perhaps the low IQ students were tracked into plumbing, cement finishing and other well-paying jobs and the high-IQ students were tracked into the professions, while average IQ students became lower paid teachers, social workers, ministers, etc.) Thus, it appears that the IQ-income relationship is really the result of schooling and family background, and not IQ. (Incidentally, this range in IQs from 70 to 130 and in SES from 58 to 135 covers over 95 percent of the entire population.) (Ceci, 1996: 87-88)

Ceci's analysis is just like Bowles and Nelson's (1974) analysis, in which they found that earnings at adulthood were influenced more by social status and schooling than by IQ. Bowles and Nelson (1974: 48) write:

Evidently, the genetic inheritance of IQ is not the mechanism which reproduces the structure of social status and economic privilege from generation to generation. Though our estimates provide no alternative explanation, they do suggest that an explanation of intergeneration immobility may well be found in aspects of family life related to socio-economic status and in the effects of socio-economic background operating both directly on economic success, and indirectly via the medium of inequalities in educational attainments.

(Note how this also refutes claims from PumpkinPerson that IQ explains income—clearly, as was shown, family background and schooling explain the IQ-income relationship, not IQ. So the “incredible correlation between IQ and income” is not due to IQ, it is due to environmental factors such as schooling and family background.)

Herrnstein’s syllogism—along with The Bell Curve (an attempt to prove the syllogism)—is therefore refuted. Since social class/family background and schooling explains the IQ-income relationship and not IQ, then Herrnstein’s syllogism crumbles. It was a main premise of The Bell Curve that society is becoming increasingly genetically stratified, with a “cognitive elite”. But Conley and Domingue (2015: 520) found “little evidence for the proposition that we are becoming increasingly genetically stratified.”

IQ testing legitimizes social hierarchies (Chomsky, 1972; Roberts, 2015) and, in Herrnstein’s case, attempted to show that social hierarchies are an inevitability due to the genetic transmission of mental abilities that influence success and income. Such research cannot be socially neutral (Roberts, 2015) and so, this is yet another reason to ban IQ tests, as I have argued. IQ tests are a measure of social class (Ceci, 1996; Richardson, 2002, 2017), and such tests were created to justify existing social hierarchies (Mensh and Mensh, 1991).

Thus, the very purpose of IQ tests was to confirm the current social order as naturally proper. Intelligence tests were not misused to support hereditary theories of social hierarchies; they were perfected in order to support them. The IQ supplied an essential difference among human beings that deliberately reflected racial and class stratifications in order to justify them as natural. Research on the genetics of intelligence was far from socially neutral when the very purpose of theorizing the heritability of intelligence was to confirm an unequal social order. (Roberts, 2015: S51)

Herrnstein's syllogism seems sound, but in actuality, it is not. Herrnstein was implying that genes were the cause of mental abilities and then, eventually, of success and prestige. But one can look at Herrnstein's syllogism from an environmentalist point of view (do note that the hereditarian/environmentalist debate is futile and continues the claim that IQ tests test 'intelligence', whatever that is). When Terman's Termites were matched for IQ, family background and schooling explained the IQ-income relationship. Ceci (1996), reanalyzing Terman's data and replicating Bowles and Nelson's (1974) analysis, showed that social class and schooling, not IQ, explain the relationship between IQ and income.

The conclusion of Herrnstein's argument can, as I've already shown, be an environmental one—through cultural, not genetic, transmission. Arguments that IQ is 'genetic' imply that certain individuals/groups will tend to stay in their social class, as Pinker (2002: 106) states: "Smarter people will tend to float into the higher strata, and their children will tend to stay there." This, as has been shown, is due to social class, not 'smarts' (scores on an IQ test). In any case, this is yet another reason why IQ tests and the research behind them should be banned: IQ tests attempt to justify the current social order as 'inevitable' due to genes that influence mental abilities. This claim, though, is false and, therefore—along with the fact that America is not becoming more genetically stratified (Conley and Domingue, 2015)—Herrnstein's syllogism crumbles. The argument attempts to justify the claim that class has a 'genetic' component (as Murray, 2020, attempts to show), but subsequent analyses and arguments have shown that Herrnstein's argument does not hold.

Nature, Nurture, and Athleticism

1600 words

Nature vs nurture can be said to be a debate on what is ‘innate’ and what is ‘acquired’ in an organism. Debates about how nature and nurture tie into athletic ability and race both fall back onto the dichotomous notion. “Athleticism is innate and genetic!”, the hereditarian proclaims. “That blacks of West African ancestry are over-represented in the 100m dash is evidence of nature over nurture!” How simplistic these claims are.

Steve Sailer, in his response to Birney et al on the existence of race, assumes that because those with West African ancestry have consistently produced the most finalists (and winners) in the Olympic 100m dash, race must therefore exist.

I pointed out on Twitter that it’s hard to reconcile the current dogma about race not being a biological reality with what we see in sports, such as each of the last 72 finalists in the Olympic 100-meter dash going all the way back to 1984 nine Olympics ago being at least half sub-Saharan in ancestry.

Sailer also points to:

the abundant data suggesting that individuals of sub-Saharan ancestry enjoy genetic advantages.

[…]

For example, it’s considered fine to suggest that the reason that each new Dibaba is fast is due to their shared genetics. But to say that one major reason Ethiopians keep winning Olympic running medals (now up to 54, but none at any distance shorter than the 1,500-meter metric mile because Ethiopians lack sprinting ability) is due to their shared genetics is thought unthinkable.

Sailer’s argument seems to be “Group X is better than Group Y at event A. Therefore, X and Y are races”, which is similar to the hereditarian arguments on the existence of ‘race’—just assume they exist.

The outright reductionism to genes in Sailer's view on athleticism and race is plainly obvious. That blacks are over-represented in certain sports (e.g., football and basketball) is taken to be evidence for this type of reductionism that Sailer and others appeal to (Gnida, 1995). Such appeals implicitly say: "The reason why blacks succeed at sport is due to genes while whites succeed due to hard work, so blacks don't need to work as hard as whites when it comes to sports."

There are anatomic and physiological differences between groups deemed "black" and "white", and these differences do influence sporting success. Even though this is true, it does not mean that race exists. Such reductionist claims—which I myself espoused years ago—do not hold up. Yes, blacks have a higher proportion of type II muscle fibers (Caesar and Henry, 2015), but this alone does not explain success in certain athletic disciplines.

Current genetic testing cannot identify an athlete (Pitsiladis et al, 2013). I reviewed some of the literature on power genotypes and race and concluded that there are no genes yet identified that can be said to be a sufficient cause of success in power sports.

Just because group A has gene or gene network G and competes in competition C does not mean that gene or gene network G contributes in full—or in part—to sporting success. The correlations could be coincidental and non-functional in regard to the sport in question. Athletes should be studied in isolation, meaning studying a specific athlete in a specific discipline to ascertain what works for that athlete, how, and why, along with taking anthropometric measures, seeing how badly they want "it", and examining other environmental factors such as nutrition and training. Looking at the body as a system takes us away from privileging one part over another—while still understanding that genes do play a role, just not the role that reductionists believe.

No evidence exists for DNA variants that are common to endurance athletes (Rankinen et al, 2016). But they do have one thing in common (which is an environmental effect on biology): those born at altitude have a permanently altered ventilatory response as adults, while "Peruvians born at altitude have a nearly 10% larger forced vital capacity compared to genetically matched Peruvians born at sea level" (Brutasaert and Parra, 2009: 16). Certain environmental effects on biology are well-known, and those biological changes do help in certain athletic events (Epstein, 2014). Yan et al (2016) "conclude that the traditional argument of nature versus nurture is no longer relevant, as it has been clearly established that both are important factors in the road to becoming an elite athlete."

Georgiades et al (2017) go the other way; what they argue is clear in the title of their paper, "Why nature prevails over nurture in the making of the elite athlete." They write:

Despite this complexity, the overwhelming and accumulating evidence, amounted through experimental research spanning almost two centuries, tips the balance in favour of nature in the “nature” and “nurture” debate. In other words, truly elite-level athletes are built – but only from those born with innate ability.

They use twin studies as an example, stating that heritability being greater than 50% but lower than 100% means "that the environment is also important." But this is a strange take, especially from seasoned sports scientists (like Pitsiladis). Attempting to partition traits into 'nature' and 'nurture' components, and then arguing that the emergence of a trait is due more to genetics than environment, is an erroneous use of heritability estimates. It is not possible—nor feasible—to separate traits into genetic and environmental components. The question does not even make sense.

“… the question of how to separate the native from the acquired in the responses of man does not seem likely to be answered because the question is unintelligible.” (Leonard Carmichael 1925, quoted in Genes, Determinism and God, Alexander, 2017)
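To see why, consider a toy example (my own, not from any cited study) with crossing reaction norms: genotype 1 does best in environment 1, genotype 2 in environment 2. Averaged over environments, the 'main effects' of genotype and environment are both zero, yet genotype matters in every single environment—all of the variation lives in the interaction, and there are no sensible percentages to hand out:

```python
# Crossing reaction norms: a case where nature/nurture percentages fail.
# All numbers are invented for illustration.
import numpy as np

# rows: genotypes G1, G2; columns: environments E1, E2
phenotype = np.array([[10.0, 20.0],
                      [20.0, 10.0]])

grand_mean = phenotype.mean()                             # 15.0
genotype_effect = phenotype.mean(axis=1) - grand_mean     # [0., 0.]
environment_effect = phenotype.mean(axis=0) - grand_mean  # [0., 0.]
interaction = phenotype - grand_mean                      # all the signal is GxE

print(genotype_effect, environment_effect)
print(interaction)
```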

Tucker and Collins (2012) write:

Rather, individual performance thresholds are determined by our genetic make-up, and training can be defined as the process by which genetic potential is realised. Although the specific details are currently unknown, the current scientific literature clearly indicates that both nurture and nature are involved in determining elite athletic performance. In conclusion, elite sporting performance is the result of the interaction between genetic and training factors, with the result that both talent identification and management systems to facilitate optimal training are crucial to sporting success.

Tucker and Collins (2012) define training as the realization of genetic potential, while DNA "controls the ceiling" of what one may be able to accomplish: "… training maximises the likelihood of obtaining a performance level with a genetically controlled 'ceiling', [which] accounts for the observed dominance of certain populations in specific sporting disciplines" (Tucker and Collins, 2012: 6). "Training" is the environment here, and the "genetically controlled 'ceiling'" is the genes. The authors are arguing that while training is important, training just realizes the 'potential' that is 'already in' the genes—an erroneous way of looking at genes. Shenk (2010: 107) explains why:

As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person's genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction.

The model proposed by Tucker and Collins (2012) is pretty reductionist (see Ericsson, 2012 for a response), while the model proposed by Shenk (2010) is more holistic. The hypothetical model explaining Kenyan distance running success (Wilbur and Pitsiladis, 2012) is also a more realistic way of assessing sport dominance:

[Figure: Wilbur and Pitsiladis's (2012) hypothetical multi-factor model of Kenyan distance-running success.]

The formation of an elite athlete comes down to a combination of genes, training, and numerous other interacting factors. The attempt to boil the appearance of a certain trait down to either 'genes' or 'environment' and partition them into percentages is an unsound procedure. That a certain group continuously wins a certain event does not constitute evidence that the group in question is a race, nor does it constitute evidence that 'genes' are the cause of between-group outcomes in that event. The holistic model of human athletic performance—in which genes contribute to certain physiological processes along with training and other biomechanical and psychological differences—is the correct way to think about sport and race. Actually seeing an athlete in motion in his preferred sport is (and I believe always will be) superior to genetic analyses alone. Genetic tests also have "no role to play in talent identification" (Webborn et al, 2015).

One emerging concept is that there are many potential genetic pathways to a given phenotype. This concept is consistent with ideas that biological redundancy underpins complex multiscale physiological responses and adaptations in humans. From an applied perspective, the ideas discussed in this review suggest that talent identification on the basis of DNA testing is likely to be of limited value, and that field testing, which is essentially a higher order 'bioassay', is likely to remain a key element of talent identification in both the near and foreseeable future. (Joyner, 2019, "Genetic Approaches for Sports Performance: How Far Away Are We?")

Athleticism is irreducible to biology (Louis, 2004). Holistic views (nature and nurture) will beat reductionist views (nature vs. nurture); given how biological systems work, there is no reason to privilege one level over another (Noble, 2012), and so no reason to privilege the gene over the environment or the environment over the gene. The interaction of multiple factors explains sporting success.

Follow the Leader? Selfish Genes, Evolution, and Nationalism

1750 words

Yet we get tremendously increased phenotypic variation … because the form and variation of cells, what they produce, whether to grow, to move, or what kind of cell to become, is under control of a whole dynamic system, not the genes. (Richardson, 2017: 125)

In 1976 Richard Dawkins published his groundbreaking book The Selfish Gene (Dawkins, 1976). In the book, Dawkins argues that selection occurs at the level of the gene—“the main theme of his book is a metaphorical account of competition between genes …” (Midgley, 2010: 45). Others then took note of the new theory and attempted to integrate it into their thinking. But is it as simple as Dawkins makes it out to be? Are we selfish due to the genes we carry? Is the theory testable? Can it be distinguished from other competing theories? Can it be used to justify certain behaviors?


Rushton, selfish genes, nationalism and politics

J.P. Rushton—a serious scholar, perhaps most well-known for attempting to use r/K selection theory to explain human behavior (Anderson, 1991)—made perhaps the most controversial use of Dawkins' theory. The main axiom of the theory is that an organism is just a gene's way of ensuring the survival of other genes (Rushton, 1997). Thus, Rushton's genetic similarity theory posits that those who are more genetically similar—who share more genes—will be more altruistic toward one another even if they are not related, and will show more negative attitudes toward less genetically similar individuals. This is the gene's "way" of propagating itself through evolutionary time. Richardson (2017: 9-11) surveys the many different ways in which genes are invoked in attempts at justification of this sort.

At the beginning of his career, Rushton was a social learning theorist studying altruism, even publishing a book on the matter—Altruism, Socialization and Society (Rushton, 1980)—in which he reviews the sociobiological literature and concludes that altruism is a learned behavior. Rushton seems to have made the shift from a social learning perspective to a genetic determinist perspective in the years between the publication of Altruism, Socialization and Society and 1984, when he published his genetic similarity theory. So, attempting to explain altruism through genes, while not part of Rushton's original research programme, seems, to me, a natural evolution in his thought (however flawed it may be).

Dawkins responded to attempts to use his theory to justify nationalism and patriotism through an evolutionary lens in an interview with Frank Miele for Skeptic:

Skeptic: How do you evaluate the work of Irenäus Eibl-Eibesfeldt, J.P. Rushton, and Pierre van den Berghe, all of whom have argued that kin selection theory does help explain nationalism and patriotism?

Dawkins: One could invoke a kind of "misfiring" of kin selection if you wanted to in such cases. Misfirings are common enough in evolution. For example, when a cuckoo host feeds a baby cuckoo, that is a misfiring of behavior which is naturally selected to be towards the host's own young. There are plenty of opportunities for misfirings. I could imagine that racist feeling could be a misfiring, not of kin selection but of reproductive isolation mechanisms. At some point in our history there may have been two species of humans who were capable of mating together but who might have produced sterile hybrids (such as mules). If that were true, then there could have been selection in favor of a "horror" of mating with the other species. Now that could misfire in the same sort of way that the cuckoo host's parental impulse misfires. The rule of thumb for that hypothetical avoiding of miscegenation could be "Avoid mating with anybody of a different color (or appearance) from you."

I’m happy for people to make speculations along those lines as long as they don’t again jump that is-ought divide and start saying, “therefore racism is a good thing.” I don’t think racism is a good thing. I think it’s a very bad thing. That is my moral position. I don’t see any justification in evolution either for or against racism. The study of evolution is not in the business of providing justifications for anything.

This is similar to his reaction when Bret Weinstein remarked that the Nazis’ “behaviors” during the Holocaust “were completely comprehensible at the level of fitness”—at the level of the gene. To which Dawkins replied, “I think nationalism may be an even greater evil than religion. And I’m not sure that it’s actually helpful to speak of it in Darwinian terms.” This is what I like to call “rampant adaptationism.”

This is important because Rushton (1998) invokes Dawkins’ theory as justification for his genetic similarity theory (GST; Rushton, 1997), attempting to justify ethno-nationalism from a gene’s-eye view. Rushton did exactly what Dawkins warned against: using the theory to justify nationalism and patriotism. Rushton (1998: 486) states that “Genetic Similarity Theory explains why” ethnic nationalism has come back into the picture. Kin selection theory (which, like selfish gene theory, Rushton invoked) has numerous misunderstandings attached to it, and of course Rushton, too, was an offender (Park, 2007).

Dawkins (1981), in Selfish genes in race or politics, stated that “It is annoying to find this elegant and important theory being dragged down to the ephemeral level of human politics, and parochial British politics at that.” Rushton (2005: 494) responded, stating that “feeling a moral obligation to condemn racism, some evolutionists minimised the theoretical possibility of a biological underpinning to ethnic or national favouritism.”


Testability?

The main premise of Dawkins’ theory is that evolution is gene-centered and that selection occurs at the level of the gene—genes whose effects promote fitness are selected for, while less fit variants are selected against. This “gene’s-eye view” of evolution states that adaptive evolution occurs through the differential survival of competing genes, increasing the frequency of those alleles whose phenotypic effects successfully promote their own propagation, with “gene” defined as “not just one single physical bit of DNA [but] all replicas of a particular bit of DNA distributed throughout the world.”
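To make the claim concrete, here is a minimal sketch (my illustration with hypothetical numbers, not anything from Dawkins) of the population-genetic bookkeeping behind the gene’s-eye view: a single locus where one allele carries a small fitness advantage, iterated under the standard haploid selection recursion.

```python
# Minimal illustration of gene-centered "selfishness": a single haploid locus
# with two alleles, where allele A has fitness 1 + s and the alternative has
# fitness 1. Standard recursion: p' = p(1+s) / (1 + p*s).

def next_frequency(p: float, s: float) -> float:
    """Frequency of allele A in the next generation under selection coefficient s."""
    mean_fitness = 1 + p * s  # population mean fitness
    return p * (1 + s) / mean_fitness

p, s = 0.01, 0.05  # a rare allele with a 5% fitness edge (illustrative values)
for _ in range(300):
    p = next_frequency(p, s)

print(f"Frequency after 300 generations: {p:.4f}")  # approaches fixation (1.0)
```

Note that the only empirically observable quantity in this bookkeeping is the change in allele frequency—which is exactly where Noble’s objection below gets its grip.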

Noble (2018) discusses “two fatal difficulties in the selfish gene version of neo-Darwinism”:

The first is that, from a physiological viewpoint, it doesn’t lead to a testable prediction. The only problem is that the central definition of selfish gene theory is not independent of the only experimental test of the theory, which is whether genes, defined as DNA sequences, are in fact selfish, i.e., whether their frequency in the gene pool increases (18). The second difficulty is that DNA can’t be regarded as a replicator separate from the cell (11, 17). The cell, and specifically its living physiological functionality, is what makes DNA be replicated faithfully, as I will explain later.

Noble (2017: 156) further elaborates in Dance to the Tune of Life: Biological Relativity:

Could this problem be avoided by attaching a meaning to ‘selfish’ as applied to DNA sequences that is independent of meanings in terms of phenotype? For example, we could say that a DNA sequence is ‘selfish’ to the extent to which its frequency in subsequent generations is increased. This at least would be an objective definition that could be measured in terms of population genetics. But wait a minute! The whole point of the characterisation of a gene as selfish is precisely that this property leads to its success in reproducing itself. We cannot make the prediction of a theory be the basis of the definition of the central element of the theory. If we do that, the theory is empty from the viewpoint of empirical science.
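Noble’s circularity point can be put in symbols (my notation, not his). If an allele is defined as ‘selfish’ by its frequency increase, the theory’s central ‘prediction’ collapses into a tautology:

\[
\text{selfish}(A) \;\overset{\text{def}}{=}\; \Delta p_A > 0
\quad\Longrightarrow\quad
\text{“selfish alleles spread”} \;\equiv\; \bigl(\Delta p_A > 0 \Rightarrow \Delta p_A > 0\bigr),
\]

which is true by definition and so rules nothing out: the definition of the theory’s central term doubles as its only empirical test.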

Dawkins’ theory is, therefore, “not a physiologically testable hypothesis” (Noble, 2011). Dawkins’ theory posits that the gene is the unit of selection, whereas the organism is only used to propagate the selfish genes. But “Just as Special Relativity and General Relativity can be succinctly phrased by saying that there is no global (privileged) frame of reference, Biological Relativity can be phrased as saying that there is no global frame of causality in organisms” (Noble, 2017: 172). Dawkins’ theory privileges the gene as the unit of selection, when there is no privileged unit of selection in multi-level biological systems (Noble, 2012).

In The Solitary Self: Darwin and the Selfish Gene, Midgley (2010) states that “The choice of the word ‘selfish’ is actually quite a strange one. This word is not really a suitable one for what Dawkins wanted to say about genetics because genes do not act alone.” As Dawkins later noted, “the cooperative gene” would have been a better description, and The Immortal Gene a better title for the book. Midgley (2010: 16) states that Dawkins and Wilson (in The Selfish Gene and Sociobiology, respectively) “use a very simple concept of selfishness derived not from Darwin but from a wider background of Hobbesian social atomism, and give it a general explanation of all behaviour, including that of humans.” Dawkins and others claim that “the thing actually being selected was the genes” (Midgley, 2010: 47).


Conclusion

Developmental systems theory (DST) explains and predicts more than the neo-Darwinian Modern Synthesis (Laland et al., 2015). Dawkins’ theory is not testable. Indeed, the neo-Darwinian Modern Synthesis (and with it Dawkins’ selfish gene theory) is dead; an extended synthesis better explains evolution. As Fodor and Piattelli-Palmarini (2010a, b) and Fodor (2008) argue in What Darwin Got Wrong, natural selection is not mechanistic and therefore cannot select-for genes or traits (see also Midgley’s 2010: chapter 6 discussion of Fodor and Piattelli-Palmarini). (Okasha, 2018 also discusses ‘selection-for’ genes—and, specifically, Dawkins’ selfish gene theory.)

Dawkins’ theory was repurposed and used to attempt to argue for ethno-nationalism and patriotism—even though Dawkins himself is against such uses. Of course, theories can be repurposed from their original uses, but this particular use of the theory is itself erroneous, as is the case with Rushton, Russell, and Wells (1984) and Rushton (1997, 1998). Since the theory is itself not testable (Noble, 2011, 2017), it—along with all other theories that use it as their basis—should be dropped. While Rushton’s change from social learning to genetic causation regarding altruism is not out of character for his earlier research (he began his career as a social learning theorist studying altruism; Rushton, 1980), his use of the theory to explain why individuals and groups prefer those more similar to themselves ultimately fails, since it is “logically flawed” (Mealey, 1984: 571).

Genes ‘do’ what the physiological system ‘tells’ them to do; they are just inert, passive templates. What is active is the cell—the genome is an organ of the cell and is what is ‘immortal.’ Genes don’t “control” anything; they are used by and for the physiological system to carry out certain processes (Noble, 2017; Richardson, 2017: chapters 4 and 5). There are new views of what ‘genes’ really are (Portin and Wilkins, 2017)—what they are, and what they are used for.

Development is dynamic and not determined by genes. Genes (DNA sequences) are followers, not leaders. The leader is the physiological system.

Mary Midgley on ‘Intelligence’ and its ‘Measurement’

1050 words

Mary Midgley (1919-2018) was a philosopher perhaps best known for her writing on moral philosophy and her rejoinders to Richard Dawkins after the publication of The Selfish Gene. Before her passing in October 2018, she published What Is Philosophy For? on September 21st. In the book she discusses ‘intelligence’ and its ‘measurement’ and comes to familiar conclusions.

‘Intelligence’ is not a ‘thing’ like, say, temperature or weight (though it is reified as one). Thermometers measure temperature, and this was verified without relying on the thermometer itself (see Hasok Chang, Inventing Temperature). Temperature can be expressed in units like kelvin, Celsius, and Fahrenheit. Temperature reflects the average kinetic energy of a substance’s particles; ‘thermo’ means heat while ‘meter’ means to measure, so heat is what is being measured with a thermometer.
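To be concrete about what grounds the measurement (a textbook kinetic-theory result, not something from Midgley): for an ideal gas, the average translational kinetic energy per particle is proportional to absolute temperature,

\[
\langle E_k \rangle = \tfrac{3}{2} k_B T,
\]

where \(k_B\) is Boltzmann’s constant. The unit (the kelvin) is thus anchored to an independently specified physical quantity—exactly the anchoring IQ lacks.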

Scales measure weight. If energy balance is stable, weight will be stable too; eat too much or too little and weight will be gained or lost. But animals seem to have a body-weight set point, which has been experimentally demonstrated (Leibel, 2008). In any case, what a scale measures is the overall weight of an object, which it does by measuring the force between the weighed object and the Earth.
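Likewise for the scale (again a textbook relation, not the post’s own): the weight it registers is the gravitational force

\[
W = mg, \qquad g \approx 9.81\ \text{m/s}^2,
\]

so a scale calibrated in kilograms is really reading a force and dividing through by \(g\). Here, too, the measure is defined independently of the instrument that reads it.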

The whole concept of ‘intelligence’ is hopelessly unreal.

Prophecies [like those of people who work on AI] treat intelligence as a quantifiable stuff, a standard, unvarying, substance like granulated sugar, a substance found in every kind of cake — a substance which, when poured on in larger quantities, always produces a standard improvement in performance. This mythical way of talking has nothing to do with the way in which cleverness — and thought generally — actually develops among human beings. This imagery is, in fact, about as reasonable as expecting children to grow up into steamrollers on the ground that they are already getting larger and can easily be trained to stamp down gravel on roads. In both cases, there simply is not the kind of continuity that would make any such progress conceivable. (Midgley, 2018: 98)

We recognize the divergence of interests all the time when we are trying to find suitable people for different situations. Thus Bob may be an excellent mathematician but is still a hopeless sailor, while Tim, that impressive navigator, cannot deal with advanced mathematics at all. Which of them, then, should be considered the more intelligent? In real life, we don’t make the mistake of trying to add these people’s gifts up quantitatively to make a single composite genius and then hope to find him. We know that planners wanting to find a leader for their exploring expedition must either choose between these candidates or send both of them. Their peculiar capacities grow out of their special interests in topics, which is not a measurable talent but an integral part of their own character.

In fact, the word ‘intelligence’ does not name a single measurable property, like ‘temperature’ or ‘weight’. It is a general term like ‘usefulness’ or ‘rarity’. And general terms always need a context to give them any detailed application. It makes no more sense to ask whether Newton was more intelligent than Shakespeare than it does to ask if a hammer is more useful than a knife. There can’t be such a thing as an all-purpose intelligence, any more than an all-purpose tool. … Thus the idea of a single scale of cleverness, rising from the normal to beyond the highest known IQ, is simply a misleading myth.

It is unfortunate that we have got so used today to talk of IQs, which suggests that this sort of abstract cleverness does exist. This has happened because we have got used to ‘intelligence tests’ themselves, devices which sort people out into convenient categories for simple purposes, such as admission to schools and hospitals, in a way that seems to quantify their ability. This leads people to think that there is indeed a single quantifiable stuff called intelligence. But, for as long as these tests have been used, it has been clear that this language is too crude even for those simple cases. No sensible person would normally think of relying on it beyond those contexts. Far less can it be extended as a kind of brain-thermometer to use for measuring more complex kinds of ability. The idea of simply increasing intelligence in the abstract — rather than beginning to understand some particular kind of thing better — simply does not make sense. (Midgley, 2018: 100-101)

IQ researchers, though, take IQ to be a measure of a quantitative trait that can be measured in increments—like height, weight, or temperature. “So, in deciding that IQ is a quantitative trait, investigators are making big assumptions about its genetic and environmental background” (Richardson, 2000: 61). But the measure has no established validity, and hence there is no backing for the claim that IQ is a quantitative trait or that it measures what they suppose it does.

Just because we refer to something abstract does not mean that it has a referent in the real world; just because we call something ‘intelligence’ and say that it is tested—however crudely—by IQ tests does not mean that it exists and that the test is measuring it. Thermometers measure temperature; scales measure weight; IQ tests don’t measure ‘intelligence’ (whatever that is)—they measure acculturated knowledge and skills. Howe (1997: 6) writes that psychological test scores are “an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons,” while Richardson (1998: 127) writes that “The most reasonable answer to the question ‘What is being measured?’, then, is ‘degree of cultural affiliation’: to the culture of test constructors, school teachers and school curricula.”

But to what does the word ‘intelligence’ refer? The attempt to measure ‘intelligence’ fails because such tests cannot be divorced from their cultural contexts. This won’t stop IQ-ists, though, from claiming that we can rank one mind as ‘better’ than another on the basis of IQ test scores—even if they can’t define ‘intelligence.’ Midgley’s chapter, while short, gets straight to the point. ‘Intelligence’ is not a ‘thing’ like height, weight, or temperature. Height can be measured by a ruler; weight can be measured by a scale; temperature can be measured by a thermometer. Intelligence? It can’t be measured by an IQ test.
